Test Report: Docker_Linux_containerd_arm64 18585

                    
649852bcd007960ac9edddddae8235c4914b1566:2024-04-08:33941

Test fail (8/335)

TestAddons/parallel/Ingress (38.31s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-038955 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-038955 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-038955 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6dcb0354-2611-436f-af41-9ea02c99e8d1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6dcb0354-2611-436f-af41-9ea02c99e8d1] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003869875s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-038955 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-038955 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-038955 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.06724966s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr:
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-038955 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-038955 addons disable ingress-dns --alsologtostderr -v=1: (1.579328336s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-038955 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-038955 addons disable ingress --alsologtostderr -v=1: (7.785235503s)
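The failure above is nslookup timing out against the ingress-dns server on the node IP (192.168.49.2). For reference, a minimal Go sketch of the same DNS probe, assuming ingress-dns listens on UDP port 53 of the node IP; the hostname and 15s timeout mirror the output above, but this is an illustration, not the test's own code (addons_test.go shells out to nslookup):

-- go sketch --
// dnscheck.go: reproduce the DNS probe that nslookup performs above.
// The server address 192.168.49.2:53, the hostname, and the timeout are
// taken from the log output above; everything else is an illustrative
// assumption, not code from addons_test.go.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Route every lookup to the minikube node instead of /etc/resolv.conf.
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 15 * time.Second}
			return d.DialContext(ctx, "udp", "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	ips, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		// A timeout here corresponds to ";; connection timed out" above.
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", ips)
}
-- /go sketch --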
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-038955
helpers_test.go:235: (dbg) docker inspect addons-038955:

-- stdout --
	[
	    {
	        "Id": "4efea5696ea0b40003d5fd5e7c5da9b695aabce4483262463f406dab53327987",
	        "Created": "2024-04-08T18:43:50.919260005Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 845198,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-08T18:43:51.160839022Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8071b9dd214010e53befdd8360b63c717c30e750b027ce9f279f5c79f4d48a44",
	        "ResolvConfPath": "/var/lib/docker/containers/4efea5696ea0b40003d5fd5e7c5da9b695aabce4483262463f406dab53327987/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4efea5696ea0b40003d5fd5e7c5da9b695aabce4483262463f406dab53327987/hostname",
	        "HostsPath": "/var/lib/docker/containers/4efea5696ea0b40003d5fd5e7c5da9b695aabce4483262463f406dab53327987/hosts",
	        "LogPath": "/var/lib/docker/containers/4efea5696ea0b40003d5fd5e7c5da9b695aabce4483262463f406dab53327987/4efea5696ea0b40003d5fd5e7c5da9b695aabce4483262463f406dab53327987-json.log",
	        "Name": "/addons-038955",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-038955:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-038955",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3e04d182fc902be68e612a64b762064e5d92751d115daf58f50a98c57dcbe38b-init/diff:/var/lib/docker/overlay2/56d7d8514c63dab1b3fb6d26c1f92815f34275e9a0ff6f17f417c17da312f7ae/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e04d182fc902be68e612a64b762064e5d92751d115daf58f50a98c57dcbe38b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e04d182fc902be68e612a64b762064e5d92751d115daf58f50a98c57dcbe38b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e04d182fc902be68e612a64b762064e5d92751d115daf58f50a98c57dcbe38b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-038955",
	                "Source": "/var/lib/docker/volumes/addons-038955/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-038955",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-038955",
	                "name.minikube.sigs.k8s.io": "addons-038955",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4d4533e8f783e2acb2ab234a2a86aa94f9a6ed3d1f60e251f4cdf11896a79004",
	            "SandboxKey": "/var/run/docker/netns/4d4533e8f783",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33565"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33564"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33563"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-038955": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "3de74c8ce9b2d3a4060ebb4a143eff35923a36e02504ace7eaf2141f6c28cb29",
	                    "EndpointID": "39e7fd643567c964b21a3f7ee42d64a9173299f8bc524f9c586ec88947bb25ae",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-038955",
	                        "4efea5696ea0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
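The inspect output above is how the post-mortem recovers the container's host-port bindings (e.g. 22/tcp published on 127.0.0.1:33565, which the SSH provisioner dials later in this log). Below is a minimal Go sketch of extracting those bindings from the same JSON, assuming the container name addons-038955; the struct covers only the fields used here and is not minikube's own parser (minikube reads the same data with "docker container inspect -f", as seen further down in this log):

-- go sketch --
// ports.go: list host-port bindings from the `docker inspect` JSON above.
// Only NetworkSettings.Ports is decoded; the rest of the document is ignored.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "addons-038955").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "inspect failed:", err)
		os.Exit(1)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
		fmt.Fprintln(os.Stderr, "decode failed:", err)
		os.Exit(1)
	}
	// For the container above this prints e.g. "22/tcp -> 127.0.0.1:33565".
	for port, bindings := range entries[0].NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}
-- /go sketch --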
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-038955 -n addons-038955
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-038955 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-038955 logs -n 25: (1.830336222s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-871512              | download-only-871512   | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| start   | -o=json --download-only              | download-only-938784   | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC |                     |
	|         | -p download-only-938784              |                        |         |                |                     |                     |
	|         | --force --alsologtostderr            |                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3         |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| delete  | -p download-only-938784              | download-only-938784   | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| start   | -o=json --download-only              | download-only-674856   | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC |                     |
	|         | -p download-only-674856              |                        |         |                |                     |                     |
	|         | --force --alsologtostderr            |                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1    |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| delete  | -p download-only-674856              | download-only-674856   | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| delete  | -p download-only-871512              | download-only-871512   | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| delete  | -p download-only-938784              | download-only-938784   | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| delete  | -p download-only-674856              | download-only-674856   | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| start   | --download-only -p                   | download-docker-887640 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC |                     |
	|         | download-docker-887640               |                        |         |                |                     |                     |
	|         | --alsologtostderr                    |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | -p download-docker-887640            | download-docker-887640 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| start   | --download-only -p                   | binary-mirror-809558   | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC |                     |
	|         | binary-mirror-809558                 |                        |         |                |                     |                     |
	|         | --alsologtostderr                    |                        |         |                |                     |                     |
	|         | --binary-mirror                      |                        |         |                |                     |                     |
	|         | http://127.0.0.1:35483               |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | -p binary-mirror-809558              | binary-mirror-809558   | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| addons  | enable dashboard -p                  | addons-038955          | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC |                     |
	|         | addons-038955                        |                        |         |                |                     |                     |
	| addons  | disable dashboard -p                 | addons-038955          | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC |                     |
	|         | addons-038955                        |                        |         |                |                     |                     |
	| start   | -p addons-038955 --wait=true         | addons-038955          | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:45 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |                |                     |                     |
	|         | --addons=registry                    |                        |         |                |                     |                     |
	|         | --addons=metrics-server              |                        |         |                |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |                |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |                |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |                |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |                |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	|         | --addons=ingress                     |                        |         |                |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |                |                     |                     |
	| ip      | addons-038955 ip                     | addons-038955          | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:46 UTC | 08 Apr 24 18:46 UTC |
	| addons  | addons-038955 addons disable         | addons-038955          | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:46 UTC | 08 Apr 24 18:46 UTC |
	|         | registry --alsologtostderr           |                        |         |                |                     |                     |
	|         | -v=1                                 |                        |         |                |                     |                     |
	| addons  | addons-038955 addons                 | addons-038955          | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:46 UTC | 08 Apr 24 18:46 UTC |
	|         | disable metrics-server               |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-038955          | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:46 UTC | 08 Apr 24 18:46 UTC |
	|         | addons-038955                        |                        |         |                |                     |                     |
	| ssh     | addons-038955 ssh curl -s            | addons-038955          | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:46 UTC | 08 Apr 24 18:46 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |                |                     |                     |
	|         | nginx.example.com'                   |                        |         |                |                     |                     |
	| ip      | addons-038955 ip                     | addons-038955          | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:46 UTC | 08 Apr 24 18:46 UTC |
	| addons  | addons-038955 addons disable         | addons-038955          | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:46 UTC | 08 Apr 24 18:46 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |                |                     |                     |
	|         | -v=1                                 |                        |         |                |                     |                     |
	| addons  | addons-038955 addons disable         | addons-038955          | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:46 UTC | 08 Apr 24 18:46 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |                |                     |                     |
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 18:43:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 18:43:27.349730  844748 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:43:27.349908  844748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:43:27.349929  844748 out.go:304] Setting ErrFile to fd 2...
	I0408 18:43:27.349949  844748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:43:27.350236  844748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
	I0408 18:43:27.350716  844748 out.go:298] Setting JSON to false
	I0408 18:43:27.351788  844748 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12352,"bootTime":1712589456,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0408 18:43:27.351898  844748 start.go:139] virtualization:  
	I0408 18:43:27.355309  844748 out.go:177] * [addons-038955] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0408 18:43:27.358042  844748 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 18:43:27.360493  844748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:43:27.358143  844748 notify.go:220] Checking for updates...
	I0408 18:43:27.365141  844748 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	I0408 18:43:27.367110  844748 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	I0408 18:43:27.369024  844748 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0408 18:43:27.370745  844748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 18:43:27.373542  844748 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 18:43:27.395804  844748 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0408 18:43:27.395928  844748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 18:43:27.458303  844748 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-08 18:43:27.449540007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 18:43:27.458463  844748 docker.go:295] overlay module found
	I0408 18:43:27.460639  844748 out.go:177] * Using the docker driver based on user configuration
	I0408 18:43:27.462483  844748 start.go:297] selected driver: docker
	I0408 18:43:27.462504  844748 start.go:901] validating driver "docker" against <nil>
	I0408 18:43:27.462519  844748 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 18:43:27.463125  844748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 18:43:27.515441  844748 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-08 18:43:27.505418884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 18:43:27.515635  844748 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 18:43:27.515873  844748 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 18:43:27.517642  844748 out.go:177] * Using Docker driver with root privileges
	I0408 18:43:27.519270  844748 cni.go:84] Creating CNI manager for ""
	I0408 18:43:27.519293  844748 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0408 18:43:27.519304  844748 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 18:43:27.519408  844748 start.go:340] cluster config:
	{Name:addons-038955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-038955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:43:27.521448  844748 out.go:177] * Starting "addons-038955" primary control-plane node in "addons-038955" cluster
	I0408 18:43:27.523605  844748 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0408 18:43:27.525852  844748 out.go:177] * Pulling base image v0.0.43-1712593525-18585 ...
	I0408 18:43:27.527613  844748 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local docker daemon
	I0408 18:43:27.527564  844748 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 18:43:27.527689  844748 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0408 18:43:27.527699  844748 cache.go:56] Caching tarball of preloaded images
	I0408 18:43:27.527781  844748 preload.go:173] Found /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 18:43:27.527790  844748 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0408 18:43:27.528131  844748 profile.go:143] Saving config to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/config.json ...
	I0408 18:43:27.528151  844748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/config.json: {Name:mkbebd92cc88776bb7f3fc8706eab6e255ffd2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:43:27.542823  844748 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd to local cache
	I0408 18:43:27.542947  844748 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local cache directory
	I0408 18:43:27.542971  844748 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local cache directory, skipping pull
	I0408 18:43:27.542976  844748 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd exists in cache, skipping pull
	I0408 18:43:27.542985  844748 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd as a tarball
	I0408 18:43:27.542993  844748 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd from local cache
	I0408 18:43:43.707055  844748 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd from cached tarball
	I0408 18:43:43.707091  844748 cache.go:194] Successfully downloaded all kic artifacts
	I0408 18:43:43.707119  844748 start.go:360] acquireMachinesLock for addons-038955: {Name:mk836bd8052fab293636b8458ef6909f307919bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 18:43:43.707708  844748 start.go:364] duration metric: took 564.663µs to acquireMachinesLock for "addons-038955"
	I0408 18:43:43.707743  844748 start.go:93] Provisioning new machine with config: &{Name:addons-038955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-038955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0408 18:43:43.707835  844748 start.go:125] createHost starting for "" (driver="docker")
	I0408 18:43:43.710369  844748 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0408 18:43:43.710622  844748 start.go:159] libmachine.API.Create for "addons-038955" (driver="docker")
	I0408 18:43:43.710656  844748 client.go:168] LocalClient.Create starting
	I0408 18:43:43.710772  844748 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem
	I0408 18:43:43.972143  844748 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/cert.pem
	I0408 18:43:44.566336  844748 cli_runner.go:164] Run: docker network inspect addons-038955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0408 18:43:44.580600  844748 cli_runner.go:211] docker network inspect addons-038955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0408 18:43:44.580684  844748 network_create.go:281] running [docker network inspect addons-038955] to gather additional debugging logs...
	I0408 18:43:44.580712  844748 cli_runner.go:164] Run: docker network inspect addons-038955
	W0408 18:43:44.594613  844748 cli_runner.go:211] docker network inspect addons-038955 returned with exit code 1
	I0408 18:43:44.594649  844748 network_create.go:284] error running [docker network inspect addons-038955]: docker network inspect addons-038955: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-038955 not found
	I0408 18:43:44.594661  844748 network_create.go:286] output of [docker network inspect addons-038955]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-038955 not found
	
	** /stderr **
	I0408 18:43:44.594775  844748 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0408 18:43:44.608929  844748 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002a3cd00}
	I0408 18:43:44.608973  844748 network_create.go:124] attempt to create docker network addons-038955 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0408 18:43:44.609030  844748 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-038955 addons-038955
	I0408 18:43:44.670291  844748 network_create.go:108] docker network addons-038955 192.168.49.0/24 created
	I0408 18:43:44.670323  844748 kic.go:121] calculated static IP "192.168.49.2" for the "addons-038955" container
	I0408 18:43:44.670395  844748 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0408 18:43:44.684183  844748 cli_runner.go:164] Run: docker volume create addons-038955 --label name.minikube.sigs.k8s.io=addons-038955 --label created_by.minikube.sigs.k8s.io=true
	I0408 18:43:44.699380  844748 oci.go:103] Successfully created a docker volume addons-038955
	I0408 18:43:44.699483  844748 cli_runner.go:164] Run: docker run --rm --name addons-038955-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-038955 --entrypoint /usr/bin/test -v addons-038955:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd -d /var/lib
	I0408 18:43:46.709473  844748 cli_runner.go:217] Completed: docker run --rm --name addons-038955-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-038955 --entrypoint /usr/bin/test -v addons-038955:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd -d /var/lib: (2.009948177s)
	I0408 18:43:46.709519  844748 oci.go:107] Successfully prepared a docker volume addons-038955
	I0408 18:43:46.709554  844748 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 18:43:46.709577  844748 kic.go:194] Starting extracting preloaded images to volume ...
	I0408 18:43:46.709661  844748 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-038955:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd -I lz4 -xf /preloaded.tar -C /extractDir
	I0408 18:43:50.852931  844748 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-038955:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd -I lz4 -xf /preloaded.tar -C /extractDir: (4.143232722s)
	I0408 18:43:50.852965  844748 kic.go:203] duration metric: took 4.143384587s to extract preloaded images to volume ...
	W0408 18:43:50.853103  844748 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0408 18:43:50.853218  844748 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0408 18:43:50.906607  844748 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-038955 --name addons-038955 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-038955 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-038955 --network addons-038955 --ip 192.168.49.2 --volume addons-038955:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd
	I0408 18:43:51.169303  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Running}}
	I0408 18:43:51.192665  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:43:51.212715  844748 cli_runner.go:164] Run: docker exec addons-038955 stat /var/lib/dpkg/alternatives/iptables
	I0408 18:43:51.269923  844748 oci.go:144] the created container "addons-038955" has a running status.
	I0408 18:43:51.269950  844748 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa...
	I0408 18:43:52.380835  844748 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0408 18:43:52.397742  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:43:52.411946  844748 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0408 18:43:52.411971  844748 kic_runner.go:114] Args: [docker exec --privileged addons-038955 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0408 18:43:52.458765  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:43:52.473261  844748 machine.go:94] provisionDockerMachine start ...
	I0408 18:43:52.473378  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:43:52.489255  844748 main.go:141] libmachine: Using SSH client type: native
	I0408 18:43:52.489534  844748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33565 <nil> <nil>}
	I0408 18:43:52.489548  844748 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 18:43:52.625453  844748 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-038955
	
	I0408 18:43:52.625479  844748 ubuntu.go:169] provisioning hostname "addons-038955"
	I0408 18:43:52.625543  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:43:52.642660  844748 main.go:141] libmachine: Using SSH client type: native
	I0408 18:43:52.642915  844748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33565 <nil> <nil>}
	I0408 18:43:52.642933  844748 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-038955 && echo "addons-038955" | sudo tee /etc/hostname
	I0408 18:43:52.793369  844748 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-038955
	
	I0408 18:43:52.793446  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:43:52.809438  844748 main.go:141] libmachine: Using SSH client type: native
	I0408 18:43:52.809704  844748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33565 <nil> <nil>}
	I0408 18:43:52.809721  844748 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-038955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-038955/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-038955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 18:43:52.946092  844748 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 18:43:52.946118  844748 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18585-838483/.minikube CaCertPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18585-838483/.minikube}
	I0408 18:43:52.946146  844748 ubuntu.go:177] setting up certificates
	I0408 18:43:52.946160  844748 provision.go:84] configureAuth start
	I0408 18:43:52.946237  844748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-038955
	I0408 18:43:52.962471  844748 provision.go:143] copyHostCerts
	I0408 18:43:52.962560  844748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18585-838483/.minikube/ca.pem (1082 bytes)
	I0408 18:43:52.962686  844748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18585-838483/.minikube/cert.pem (1123 bytes)
	I0408 18:43:52.962747  844748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18585-838483/.minikube/key.pem (1675 bytes)
	I0408 18:43:52.962799  844748 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18585-838483/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca-key.pem org=jenkins.addons-038955 san=[127.0.0.1 192.168.49.2 addons-038955 localhost minikube]
	I0408 18:43:54.641677  844748 provision.go:177] copyRemoteCerts
	I0408 18:43:54.641761  844748 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 18:43:54.641804  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:43:54.658488  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:43:54.756691  844748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 18:43:54.782679  844748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 18:43:54.808131  844748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0408 18:43:54.831861  844748 provision.go:87] duration metric: took 1.885680113s to configureAuth
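Should the generated server certificate ever need auditing, the SAN set requested above (127.0.0.1, 192.168.49.2, addons-038955, localhost, minikube) can be read back with openssl; a minimal sketch using the path from the log, with output shown only approximately:

	$ openssl x509 -noout -text -in /home/jenkins/minikube-integration/18585-838483/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'
	            X509v3 Subject Alternative Name:
	                DNS:addons-038955, DNS:localhost, DNS:minikube, IP Address:127.0.0.1, IP Address:192.168.49.2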
	I0408 18:43:54.831885  844748 ubuntu.go:193] setting minikube options for container-runtime
	I0408 18:43:54.832064  844748 config.go:182] Loaded profile config "addons-038955": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:43:54.832075  844748 machine.go:97] duration metric: took 2.358792061s to provisionDockerMachine
	I0408 18:43:54.832082  844748 client.go:171] duration metric: took 11.121417128s to LocalClient.Create
	I0408 18:43:54.832100  844748 start.go:167] duration metric: took 11.121479033s to libmachine.API.Create "addons-038955"
	I0408 18:43:54.832114  844748 start.go:293] postStartSetup for "addons-038955" (driver="docker")
	I0408 18:43:54.832123  844748 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 18:43:54.832185  844748 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 18:43:54.832227  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:43:54.847421  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:43:54.946897  844748 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 18:43:54.950824  844748 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0408 18:43:54.950861  844748 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0408 18:43:54.950881  844748 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0408 18:43:54.950889  844748 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0408 18:43:54.950898  844748 filesync.go:126] Scanning /home/jenkins/minikube-integration/18585-838483/.minikube/addons for local assets ...
	I0408 18:43:54.950958  844748 filesync.go:126] Scanning /home/jenkins/minikube-integration/18585-838483/.minikube/files for local assets ...
	I0408 18:43:54.950986  844748 start.go:296] duration metric: took 118.866342ms for postStartSetup
	I0408 18:43:54.951292  844748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-038955
	I0408 18:43:54.966162  844748 profile.go:143] Saving config to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/config.json ...
	I0408 18:43:54.966495  844748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 18:43:54.966546  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:43:54.981669  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:43:55.075199  844748 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0408 18:43:55.080118  844748 start.go:128] duration metric: took 11.37226718s to createHost
	I0408 18:43:55.080145  844748 start.go:83] releasing machines lock for "addons-038955", held for 11.372421974s
	I0408 18:43:55.080227  844748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-038955
	I0408 18:43:55.096974  844748 ssh_runner.go:195] Run: cat /version.json
	I0408 18:43:55.097038  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:43:55.097302  844748 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 18:43:55.097346  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:43:55.117719  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:43:55.126101  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:43:55.209937  844748 ssh_runner.go:195] Run: systemctl --version
	I0408 18:43:55.323711  844748 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 18:43:55.328647  844748 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0408 18:43:55.352000  844748 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0408 18:43:55.352107  844748 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 18:43:55.379514  844748 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
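After that patch, a loopback config that previously lacked a name field would read roughly as follows (a sketch of the standard CNI loopback conf, not a capture from this run):

	{
	  "cniVersion": "1.0.0",
	  "name": "loopback",
	  "type": "loopback"
	}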
	I0408 18:43:55.379536  844748 start.go:494] detecting cgroup driver to use...
	I0408 18:43:55.379592  844748 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0408 18:43:55.379662  844748 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 18:43:55.391834  844748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 18:43:55.403431  844748 docker.go:217] disabling cri-docker service (if available) ...
	I0408 18:43:55.403494  844748 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 18:43:55.417378  844748 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 18:43:55.431896  844748 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 18:43:55.526946  844748 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 18:43:55.619685  844748 docker.go:233] disabling docker service ...
	I0408 18:43:55.619761  844748 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 18:43:55.640374  844748 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 18:43:55.651535  844748 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 18:43:55.742304  844748 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 18:43:55.829364  844748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 18:43:55.840735  844748 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 18:43:55.856585  844748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0408 18:43:55.866319  844748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 18:43:55.876202  844748 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 18:43:55.876309  844748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 18:43:55.886404  844748 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 18:43:55.896420  844748 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 18:43:55.905972  844748 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 18:43:55.916135  844748 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 18:43:55.925183  844748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 18:43:55.934865  844748 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 18:43:55.944173  844748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
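Taken together, the sed edits above leave the CRI stanza of /etc/containerd/config.toml looking roughly like this (a sketch; surrounding keys depend on the image's default config):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false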
	I0408 18:43:55.953928  844748 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 18:43:55.962281  844748 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 18:43:55.970821  844748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 18:43:56.051412  844748 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 18:43:56.180988  844748 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0408 18:43:56.181117  844748 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0408 18:43:56.184574  844748 start.go:562] Will wait 60s for crictl version
	I0408 18:43:56.184674  844748 ssh_runner.go:195] Run: which crictl
	I0408 18:43:56.187883  844748 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 18:43:56.222974  844748 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0408 18:43:56.223093  844748 ssh_runner.go:195] Run: containerd --version
	I0408 18:43:56.244374  844748 ssh_runner.go:195] Run: containerd --version
	I0408 18:43:56.268313  844748 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.6.28 ...
	I0408 18:43:56.272034  844748 cli_runner.go:164] Run: docker network inspect addons-038955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0408 18:43:56.285761  844748 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0408 18:43:56.289311  844748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 18:43:56.299733  844748 kubeadm.go:877] updating cluster {Name:addons-038955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-038955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 18:43:56.299858  844748 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 18:43:56.299918  844748 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 18:43:56.338409  844748 containerd.go:627] all images are preloaded for containerd runtime.
	I0408 18:43:56.338431  844748 containerd.go:534] Images already preloaded, skipping extraction
	I0408 18:43:56.338490  844748 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 18:43:56.373150  844748 containerd.go:627] all images are preloaded for containerd runtime.
	I0408 18:43:56.373173  844748 cache_images.go:84] Images are preloaded, skipping loading
	I0408 18:43:56.373182  844748 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.29.3 containerd true true} ...
	I0408 18:43:56.373280  844748 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-038955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-038955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
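The empty ExecStart= line in that drop-in is the standard systemd idiom: it clears the command inherited from kubelet.service before substituting the minikube-specific one. Once both files are written, the merged unit can be inspected with:

	$ systemctl cat kubelet    # prints /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in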
	I0408 18:43:56.373349  844748 ssh_runner.go:195] Run: sudo crictl info
	I0408 18:43:56.414584  844748 cni.go:84] Creating CNI manager for ""
	I0408 18:43:56.414604  844748 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0408 18:43:56.414613  844748 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 18:43:56.414634  844748 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-038955 NodeName:addons-038955 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 18:43:56.414763  844748 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-038955"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
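A config like this can be sanity-checked offline before it is handed to kubeadm init; as a sketch (both subcommands ship with recent kubeadm releases, including v1.29 to my knowledge):

	$ kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	$ kubeadm config print init-defaults    # upstream defaults, useful for diffing against the file above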
	
	I0408 18:43:56.414829  844748 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 18:43:56.423704  844748 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 18:43:56.423799  844748 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 18:43:56.432506  844748 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0408 18:43:56.451304  844748 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 18:43:56.469283  844748 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0408 18:43:56.487703  844748 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0408 18:43:56.491300  844748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 18:43:56.501847  844748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 18:43:56.582511  844748 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 18:43:56.596466  844748 certs.go:68] Setting up /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955 for IP: 192.168.49.2
	I0408 18:43:56.596534  844748 certs.go:194] generating shared ca certs ...
	I0408 18:43:56.596564  844748 certs.go:226] acquiring lock for ca certs: {Name:mkee58842a3256e0a530a93e9e38afd9941f0741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:43:56.597226  844748 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18585-838483/.minikube/ca.key
	I0408 18:43:57.074597  844748 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-838483/.minikube/ca.crt ...
	I0408 18:43:57.074628  844748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/ca.crt: {Name:mkce4ae0e832b12e37e26bf9d6471edd5e1a78fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:43:57.075467  844748 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-838483/.minikube/ca.key ...
	I0408 18:43:57.075488  844748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/ca.key: {Name:mk62042a661c22b7a441e1fd28756a85df9976fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:43:57.076189  844748 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.key
	I0408 18:43:57.843408  844748 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.crt ...
	I0408 18:43:57.843438  844748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.crt: {Name:mkf0e82f58f0b32f8ae550cae25c7a02c58eb2aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:43:57.844070  844748 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.key ...
	I0408 18:43:57.844090  844748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.key: {Name:mkb36bd7859f68bf2eb3f7db086951cb76f86021 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:43:57.844191  844748 certs.go:256] generating profile certs ...
	I0408 18:43:57.844253  844748 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.key
	I0408 18:43:57.844271  844748 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt with IP's: []
	I0408 18:43:58.472006  844748 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt ...
	I0408 18:43:58.472037  844748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: {Name:mk1c0b16954f49326c6c4bd73b9a323858cfb20b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:43:58.472779  844748 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.key ...
	I0408 18:43:58.472797  844748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.key: {Name:mk8215e6e92c9dae907117f42aa7eac747f4040b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:43:58.472889  844748 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/apiserver.key.e87cbb6c
	I0408 18:43:58.472909  844748 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/apiserver.crt.e87cbb6c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0408 18:43:59.338363  844748 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/apiserver.crt.e87cbb6c ...
	I0408 18:43:59.338401  844748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/apiserver.crt.e87cbb6c: {Name:mkab767fe7ee086249bc7418971194a0f2ae4df8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:43:59.338579  844748 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/apiserver.key.e87cbb6c ...
	I0408 18:43:59.338592  844748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/apiserver.key.e87cbb6c: {Name:mk5f9de2a9f230f091d420b97c8450c2bf25ce48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:43:59.339144  844748 certs.go:381] copying /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/apiserver.crt.e87cbb6c -> /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/apiserver.crt
	I0408 18:43:59.339227  844748 certs.go:385] copying /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/apiserver.key.e87cbb6c -> /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/apiserver.key
	I0408 18:43:59.339284  844748 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/proxy-client.key
	I0408 18:43:59.339304  844748 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/proxy-client.crt with IP's: []
	I0408 18:44:00.486998  844748 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/proxy-client.crt ...
	I0408 18:44:00.487090  844748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/proxy-client.crt: {Name:mk7f26bcb9d235afb41476d424356a7d85984d63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:44:00.487369  844748 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/proxy-client.key ...
	I0408 18:44:00.487411  844748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/proxy-client.key: {Name:mkdf11affdf32eb260a2877a3d6eb865b040dbd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:44:00.487732  844748 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 18:44:00.487848  844748 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem (1082 bytes)
	I0408 18:44:00.487910  844748 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/cert.pem (1123 bytes)
	I0408 18:44:00.487963  844748 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/key.pem (1675 bytes)
	I0408 18:44:00.488792  844748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 18:44:00.518834  844748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 18:44:00.548376  844748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 18:44:00.576298  844748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 18:44:00.603465  844748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0408 18:44:00.629190  844748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 18:44:00.653673  844748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 18:44:00.677875  844748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 18:44:00.702043  844748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 18:44:00.726616  844748 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 18:44:00.744749  844748 ssh_runner.go:195] Run: openssl version
	I0408 18:44:00.750165  844748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 18:44:00.759314  844748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 18:44:00.762756  844748 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I0408 18:44:00.762825  844748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 18:44:00.769670  844748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
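The b5213941.0 name is not arbitrary: OpenSSL locates CA certificates by subject-name hash (the value printed by the x509 -hash run just above) plus a .0 suffix for the first certificate with that hash, so the symlink could equivalently be built as:

	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0"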
	I0408 18:44:00.779254  844748 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 18:44:00.782650  844748 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 18:44:00.782699  844748 kubeadm.go:391] StartCluster: {Name:addons-038955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-038955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:44:00.782783  844748 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0408 18:44:00.782841  844748 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 18:44:00.819301  844748 cri.go:89] found id: ""
	I0408 18:44:00.819413  844748 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 18:44:00.828190  844748 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 18:44:00.836976  844748 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0408 18:44:00.837063  844748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 18:44:00.845874  844748 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 18:44:00.845894  844748 kubeadm.go:156] found existing configuration files:
	
	I0408 18:44:00.845967  844748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 18:44:00.854678  844748 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 18:44:00.854761  844748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 18:44:00.862973  844748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 18:44:00.871812  844748 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 18:44:00.871879  844748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 18:44:00.880337  844748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 18:44:00.889101  844748 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 18:44:00.889187  844748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 18:44:00.897656  844748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 18:44:00.906143  844748 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 18:44:00.906217  844748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 18:44:00.914746  844748 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0408 18:44:01.014047  844748 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1056-aws\n", err: exit status 1
	I0408 18:44:01.084122  844748 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 18:44:18.530395  844748 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 18:44:18.530534  844748 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 18:44:18.530643  844748 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0408 18:44:18.530698  844748 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1056-aws
	I0408 18:44:18.530760  844748 kubeadm.go:309] OS: Linux
	I0408 18:44:18.530817  844748 kubeadm.go:309] CGROUPS_CPU: enabled
	I0408 18:44:18.530906  844748 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0408 18:44:18.530966  844748 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0408 18:44:18.531027  844748 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0408 18:44:18.531089  844748 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0408 18:44:18.531145  844748 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0408 18:44:18.531193  844748 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0408 18:44:18.531259  844748 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0408 18:44:18.531343  844748 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0408 18:44:18.531423  844748 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 18:44:18.531524  844748 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 18:44:18.531613  844748 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 18:44:18.531689  844748 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 18:44:18.534260  844748 out.go:204]   - Generating certificates and keys ...
	I0408 18:44:18.534355  844748 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 18:44:18.534441  844748 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 18:44:18.534513  844748 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 18:44:18.534572  844748 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0408 18:44:18.534640  844748 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0408 18:44:18.534692  844748 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0408 18:44:18.534758  844748 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0408 18:44:18.534874  844748 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-038955 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0408 18:44:18.534927  844748 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0408 18:44:18.535041  844748 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-038955 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0408 18:44:18.535107  844748 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 18:44:18.535171  844748 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 18:44:18.535222  844748 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0408 18:44:18.535280  844748 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 18:44:18.535333  844748 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 18:44:18.535390  844748 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 18:44:18.535445  844748 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 18:44:18.535508  844748 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 18:44:18.535563  844748 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 18:44:18.535644  844748 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 18:44:18.535711  844748 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 18:44:18.537712  844748 out.go:204]   - Booting up control plane ...
	I0408 18:44:18.537829  844748 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 18:44:18.537933  844748 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 18:44:18.538075  844748 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 18:44:18.538238  844748 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 18:44:18.538365  844748 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 18:44:18.538446  844748 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 18:44:18.538655  844748 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 18:44:18.538742  844748 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.502618 seconds
	I0408 18:44:18.538853  844748 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 18:44:18.538979  844748 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 18:44:18.539053  844748 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 18:44:18.539267  844748 kubeadm.go:309] [mark-control-plane] Marking the node addons-038955 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 18:44:18.539342  844748 kubeadm.go:309] [bootstrap-token] Using token: mw9ett.dpt73aq7e732jixc
	I0408 18:44:18.540931  844748 out.go:204]   - Configuring RBAC rules ...
	I0408 18:44:18.541057  844748 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 18:44:18.541145  844748 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 18:44:18.541283  844748 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 18:44:18.541460  844748 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 18:44:18.541637  844748 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 18:44:18.541781  844748 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 18:44:18.541958  844748 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 18:44:18.542058  844748 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 18:44:18.542134  844748 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 18:44:18.542171  844748 kubeadm.go:309] 
	I0408 18:44:18.542263  844748 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 18:44:18.542303  844748 kubeadm.go:309] 
	I0408 18:44:18.542414  844748 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 18:44:18.542453  844748 kubeadm.go:309] 
	I0408 18:44:18.542490  844748 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 18:44:18.542593  844748 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 18:44:18.542648  844748 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 18:44:18.542658  844748 kubeadm.go:309] 
	I0408 18:44:18.542715  844748 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 18:44:18.542719  844748 kubeadm.go:309] 
	I0408 18:44:18.542769  844748 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 18:44:18.542772  844748 kubeadm.go:309] 
	I0408 18:44:18.542828  844748 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 18:44:18.542906  844748 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 18:44:18.542977  844748 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 18:44:18.542981  844748 kubeadm.go:309] 
	I0408 18:44:18.543069  844748 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 18:44:18.543148  844748 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 18:44:18.543153  844748 kubeadm.go:309] 
	I0408 18:44:18.543240  844748 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token mw9ett.dpt73aq7e732jixc \
	I0408 18:44:18.543346  844748 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:40732441685d52f358537af2255d867bbdb5cf15cf08de16fca49474be9f966b \
	I0408 18:44:18.543368  844748 kubeadm.go:309] 	--control-plane 
	I0408 18:44:18.543372  844748 kubeadm.go:309] 
	I0408 18:44:18.543460  844748 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 18:44:18.543464  844748 kubeadm.go:309] 
	I0408 18:44:18.543549  844748 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token mw9ett.dpt73aq7e732jixc \
	I0408 18:44:18.543668  844748 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:40732441685d52f358537af2255d867bbdb5cf15cf08de16fca49474be9f966b 
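The bootstrap token embedded in those join commands carries the 24h ttl set in the InitConfiguration above, so on a longer-lived cluster a fresh join line would be minted rather than reused; a sketch:

	$ kubeadm token create --print-join-command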
	I0408 18:44:18.543677  844748 cni.go:84] Creating CNI manager for ""
	I0408 18:44:18.543684  844748 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0408 18:44:18.545687  844748 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0408 18:44:18.547692  844748 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0408 18:44:18.552527  844748 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0408 18:44:18.552546  844748 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0408 18:44:18.590727  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
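Since the log recommends kindnet for the docker driver + containerd combination, a quick post-apply check is to watch its pods come up (the app=kindnet label is an assumption about the manifest, not taken from this log):

	$ kubectl -n kube-system get pods -l app=kindnet -w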
	I0408 18:44:18.922589  844748 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 18:44:18.922685  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:18.922709  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-038955 minikube.k8s.io/updated_at=2024_04_08T18_44_18_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f9de8f0b190a4305b11b3a925ec3e499cf3fc021 minikube.k8s.io/name=addons-038955 minikube.k8s.io/primary=true
	I0408 18:44:19.060802  844748 ops.go:34] apiserver oom_adj: -16
	I0408 18:44:19.060895  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:19.561008  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:20.061082  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:20.561991  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:21.061559  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:21.561352  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:22.061962  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:22.561372  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:23.062063  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:23.560949  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:24.061657  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:24.561271  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:25.061053  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:25.560994  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:26.061390  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:26.561101  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:27.061270  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:27.561196  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:28.061675  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:28.561358  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:29.061920  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:29.561715  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:30.061082  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:30.561273  844748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:44:30.648716  844748 kubeadm.go:1107] duration metric: took 11.726103367s to wait for elevateKubeSystemPrivileges
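The burst of identical "get sa default" calls above is a poll: kubeadm returns before the controller-manager has minted the default ServiceAccount, so minikube retries roughly every 500ms until it exists. The equivalent shell loop, as a sketch:

	until sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n default get sa default >/dev/null 2>&1; do sleep 0.5; done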
	W0408 18:44:30.648756  844748 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 18:44:30.648764  844748 kubeadm.go:393] duration metric: took 29.866070281s to StartCluster
	I0408 18:44:30.648780  844748 settings.go:142] acquiring lock: {Name:mk5026d653ab6560d4c2e7a68e9bc77339a3813a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:44:30.648915  844748 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18585-838483/kubeconfig
	I0408 18:44:30.649290  844748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/kubeconfig: {Name:mk2667c6d217e28cc639f1cedf47734a14602005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:44:30.649490  844748 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0408 18:44:30.653088  844748 out.go:177] * Verifying Kubernetes components...
	I0408 18:44:30.649627  844748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0408 18:44:30.649792  844748 config.go:182] Loaded profile config "addons-038955": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:44:30.649802  844748 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0408 18:44:30.655439  844748 addons.go:69] Setting yakd=true in profile "addons-038955"
	I0408 18:44:30.655467  844748 addons.go:234] Setting addon yakd=true in "addons-038955"
	I0408 18:44:30.655504  844748 host.go:66] Checking if "addons-038955" exists ...
	I0408 18:44:30.656004  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:30.656176  844748 addons.go:69] Setting ingress=true in profile "addons-038955"
	I0408 18:44:30.656200  844748 addons.go:234] Setting addon ingress=true in "addons-038955"
	I0408 18:44:30.656240  844748 host.go:66] Checking if "addons-038955" exists ...
	I0408 18:44:30.656597  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:30.656896  844748 addons.go:69] Setting ingress-dns=true in profile "addons-038955"
	I0408 18:44:30.656922  844748 addons.go:234] Setting addon ingress-dns=true in "addons-038955"
	I0408 18:44:30.656948  844748 host.go:66] Checking if "addons-038955" exists ...
	I0408 18:44:30.657321  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:30.657481  844748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 18:44:30.657713  844748 addons.go:69] Setting cloud-spanner=true in profile "addons-038955"
	I0408 18:44:30.657736  844748 addons.go:234] Setting addon cloud-spanner=true in "addons-038955"
	I0408 18:44:30.657758  844748 host.go:66] Checking if "addons-038955" exists ...
	I0408 18:44:30.658149  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:30.660288  844748 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-038955"
	I0408 18:44:30.660353  844748 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-038955"
	I0408 18:44:30.660377  844748 host.go:66] Checking if "addons-038955" exists ...
	I0408 18:44:30.660772  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:30.661580  844748 addons.go:69] Setting inspektor-gadget=true in profile "addons-038955"
	I0408 18:44:30.661609  844748 addons.go:234] Setting addon inspektor-gadget=true in "addons-038955"
	I0408 18:44:30.661637  844748 host.go:66] Checking if "addons-038955" exists ...
	I0408 18:44:30.662043  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:30.663471  844748 addons.go:69] Setting default-storageclass=true in profile "addons-038955"
	I0408 18:44:30.663506  844748 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-038955"
	I0408 18:44:30.663766  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:30.666123  844748 addons.go:69] Setting metrics-server=true in profile "addons-038955"
	I0408 18:44:30.666159  844748 addons.go:234] Setting addon metrics-server=true in "addons-038955"
	I0408 18:44:30.666276  844748 host.go:66] Checking if "addons-038955" exists ...
	I0408 18:44:30.666855  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:30.675426  844748 addons.go:69] Setting gcp-auth=true in profile "addons-038955"
	I0408 18:44:30.675484  844748 mustload.go:65] Loading cluster: addons-038955
	I0408 18:44:30.675667  844748 config.go:182] Loaded profile config "addons-038955": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:44:30.675916  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:30.685790  844748 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-038955"
	I0408 18:44:30.685891  844748 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-038955"
	I0408 18:44:30.685946  844748 host.go:66] Checking if "addons-038955" exists ...
	I0408 18:44:30.686482  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:30.710179  844748 addons.go:69] Setting registry=true in profile "addons-038955"
	I0408 18:44:30.710268  844748 addons.go:234] Setting addon registry=true in "addons-038955"
	I0408 18:44:30.710333  844748 host.go:66] Checking if "addons-038955" exists ...
	I0408 18:44:30.710825  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:30.721410  844748 addons.go:69] Setting storage-provisioner=true in profile "addons-038955"
	I0408 18:44:30.757713  844748 addons.go:234] Setting addon storage-provisioner=true in "addons-038955"
	I0408 18:44:30.757797  844748 host.go:66] Checking if "addons-038955" exists ...
	I0408 18:44:30.766084  844748 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0408 18:44:30.770207  844748 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0408 18:44:30.770277  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0408 18:44:30.770382  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:44:30.762433  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:30.779996  844748 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0408 18:44:30.735895  844748 addons.go:69] Setting volumesnapshots=true in profile "addons-038955"
	I0408 18:44:30.735858  844748 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-038955"
	I0408 18:44:30.782829  844748 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0408 18:44:30.782867  844748 addons.go:234] Setting addon volumesnapshots=true in "addons-038955"
	I0408 18:44:30.785514  844748 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0408 18:44:30.785544  844748 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-038955"
	I0408 18:44:30.788059  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:30.802486  844748 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0408 18:44:30.800606  844748 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0408 18:44:30.800613  844748 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0408 18:44:30.800618  844748 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0408 18:44:30.800657  844748 host.go:66] Checking if "addons-038955" exists ...
	I0408 18:44:30.800669  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0408 18:44:30.805797  844748 addons.go:234] Setting addon default-storageclass=true in "addons-038955"
	I0408 18:44:30.809495  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:44:30.809814  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:30.811196  844748 host.go:66] Checking if "addons-038955" exists ...
	I0408 18:44:30.815260  844748 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0408 18:44:30.815423  844748 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0408 18:44:30.815603  844748 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0408 18:44:30.817598  844748 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 18:44:30.830221  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 18:44:30.830367  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:44:30.845461  844748 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0408 18:44:30.850713  844748 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0408 18:44:30.853170  844748 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0408 18:44:30.855670  844748 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0408 18:44:30.857989  844748 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0408 18:44:30.855378  844748 host.go:66] Checking if "addons-038955" exists ...
	I0408 18:44:30.855579  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:30.855603  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0408 18:44:30.855611  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0408 18:44:30.903776  844748 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0408 18:44:30.910488  844748 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0408 18:44:30.891104  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:44:30.891136  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:44:30.891180  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:44:30.918064  844748 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0408 18:44:30.921089  844748 out.go:177]   - Using image docker.io/registry:2.8.3
	I0408 18:44:30.938252  844748 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0408 18:44:30.938275  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0408 18:44:30.938345  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:44:30.938168  844748 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0408 18:44:30.940505  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0408 18:44:30.940583  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:44:30.938182  844748 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0408 18:44:30.986673  844748 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0408 18:44:30.986696  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0408 18:44:30.986767  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:44:31.000515  844748 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-038955"
	I0408 18:44:31.000573  844748 host.go:66] Checking if "addons-038955" exists ...
	I0408 18:44:31.001037  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:31.016357  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:44:31.029064  844748 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 18:44:31.020989  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:44:31.024981  844748 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0408 18:44:31.038265  844748 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 18:44:31.046302  844748 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 18:44:31.046348  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 18:44:31.046414  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:44:31.053490  844748 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0408 18:44:31.053518  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0408 18:44:31.053593  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:44:31.046720  844748 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0408 18:44:31.046734  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 18:44:31.065950  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:44:31.066260  844748 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0408 18:44:31.066285  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0408 18:44:31.066330  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:44:31.092105  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:44:31.111078  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:44:31.112794  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:44:31.113288  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:44:31.148275  844748 out.go:177]   - Using image docker.io/busybox:stable
	I0408 18:44:31.152663  844748 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0408 18:44:31.157746  844748 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0408 18:44:31.157770  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0408 18:44:31.157840  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:44:31.169221  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:44:31.180247  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:44:31.188312  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:44:31.190594  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:44:31.195942  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:44:31.211403  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
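Note: each docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" call above resolves the host port Docker mapped to the node container's sshd (33565 here), which sshutil then dials on 127.0.0.1 with the machine's id_rsa key. An illustrative Go version of that resolve-then-dial flow, assuming golang.org/x/crypto/ssh and a hypothetical key path (not minikube's sshutil):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"

		"golang.org/x/crypto/ssh"
	)

	// hostSSHPort resolves the host port Docker mapped to the container's
	// port 22, using the same Go template seen in the cli_runner log lines.
	func hostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("addons-038955")
		if err != nil {
			panic(err)
		}
		key, err := os.ReadFile("/path/to/.minikube/machines/addons-038955/id_rsa") // hypothetical path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:"+port, cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Println("ssh client connected on port", port)
	}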
	I0408 18:44:31.454524  844748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
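Note: the pipeline above fetches the coredns ConfigMap, uses sed to splice a hosts block in front of the forward plugin (and a log directive in front of errors), then replaces the ConfigMap. Reconstructed from those sed expressions alone, the relevant Corefile fragment ends up roughly as follows (other plugins elided, indentation approximate):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}

This is what later lets pods resolve host.minikube.internal to the host's gateway address (192.168.49.1).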
	I0408 18:44:31.454698  844748 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 18:44:31.759752  844748 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0408 18:44:31.759826  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0408 18:44:31.792614  844748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 18:44:31.844637  844748 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0408 18:44:31.844708  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0408 18:44:31.869747  844748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 18:44:31.883349  844748 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 18:44:31.883417  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0408 18:44:31.906460  844748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0408 18:44:32.017020  844748 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0408 18:44:32.017091  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0408 18:44:32.037046  844748 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0408 18:44:32.037118  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0408 18:44:32.044464  844748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0408 18:44:32.093935  844748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0408 18:44:32.099944  844748 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0408 18:44:32.099974  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0408 18:44:32.121214  844748 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0408 18:44:32.121239  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0408 18:44:32.123067  844748 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0408 18:44:32.123090  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0408 18:44:32.127218  844748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0408 18:44:32.171499  844748 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0408 18:44:32.171524  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0408 18:44:32.181133  844748 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0408 18:44:32.181162  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0408 18:44:32.250396  844748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0408 18:44:32.259169  844748 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 18:44:32.259196  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 18:44:32.261898  844748 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0408 18:44:32.261924  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0408 18:44:32.300006  844748 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0408 18:44:32.300034  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0408 18:44:32.345698  844748 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0408 18:44:32.345729  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0408 18:44:32.360539  844748 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0408 18:44:32.362794  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0408 18:44:32.387750  844748 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0408 18:44:32.387774  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0408 18:44:32.439274  844748 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 18:44:32.439301  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 18:44:32.516927  844748 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0408 18:44:32.516953  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0408 18:44:32.587660  844748 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0408 18:44:32.587685  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0408 18:44:32.611938  844748 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0408 18:44:32.611962  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0408 18:44:32.615677  844748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0408 18:44:32.646913  844748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0408 18:44:32.779069  844748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 18:44:32.802325  844748 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0408 18:44:32.802352  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0408 18:44:32.866986  844748 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0408 18:44:32.867013  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0408 18:44:32.998851  844748 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0408 18:44:32.998876  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0408 18:44:33.107125  844748 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 18:44:33.107150  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0408 18:44:33.117689  844748 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0408 18:44:33.117714  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0408 18:44:33.142897  844748 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0408 18:44:33.142921  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0408 18:44:33.299682  844748 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.84493785s)
	I0408 18:44:33.299777  844748 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.845182604s)
	I0408 18:44:33.299797  844748 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0408 18:44:33.301365  844748 node_ready.go:35] waiting up to 6m0s for node "addons-038955" to be "Ready" ...
	I0408 18:44:33.305209  844748 node_ready.go:49] node "addons-038955" has status "Ready":"True"
	I0408 18:44:33.305238  844748 node_ready.go:38] duration metric: took 3.845101ms for node "addons-038955" to be "Ready" ...
	I0408 18:44:33.305250  844748 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 18:44:33.318292  844748 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-crbt7" in "kube-system" namespace to be "Ready" ...
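Note: pod_ready's wait amounts to polling each system-critical pod for a Ready condition until the timeout expires. An illustrative client-go version of that check, assuming a hypothetical kubeconfig path (minikube's pod_ready.go differs in detail):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-76f75df574-dj6cf", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod")
	}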
	I0408 18:44:33.373216  844748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 18:44:33.490099  844748 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0408 18:44:33.490123  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0408 18:44:33.536083  844748 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0408 18:44:33.536108  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0408 18:44:33.570101  844748 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0408 18:44:33.570124  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0408 18:44:33.678409  844748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0408 18:44:33.773861  844748 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0408 18:44:33.773885  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0408 18:44:33.805964  844748 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-038955" context rescaled to 1 replicas
	I0408 18:44:33.822035  844748 pod_ready.go:97] error getting pod "coredns-76f75df574-crbt7" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-crbt7" not found
	I0408 18:44:33.822107  844748 pod_ready.go:81] duration metric: took 503.781921ms for pod "coredns-76f75df574-crbt7" in "kube-system" namespace to be "Ready" ...
	E0408 18:44:33.822134  844748 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-crbt7" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-crbt7" not found
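Note: the "not found" above is expected rather than a failure. The rescale logged at 18:44:33.805 drops the coredns Deployment from two replicas to one, deleting coredns-76f75df574-crbt7 mid-wait, so the waiter skips it and moves on to the surviving replica. minikube performs the rescale through its kapi helper; an illustrative command-line equivalent would be:

	kubectl --context addons-038955 -n kube-system scale deployment coredns --replicas=1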
	I0408 18:44:33.822154  844748 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-dj6cf" in "kube-system" namespace to be "Ready" ...
	I0408 18:44:33.943712  844748 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0408 18:44:33.943738  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0408 18:44:33.972433  844748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0408 18:44:35.521463  844748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.728763561s)
	I0408 18:44:35.521563  844748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.651745964s)
	I0408 18:44:35.521622  844748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.615073417s)
	I0408 18:44:35.521665  844748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.477128975s)
	I0408 18:44:35.828639  844748 pod_ready.go:102] pod "coredns-76f75df574-dj6cf" in "kube-system" namespace has status "Ready":"False"
	I0408 18:44:37.829731  844748 pod_ready.go:102] pod "coredns-76f75df574-dj6cf" in "kube-system" namespace has status "Ready":"False"
	I0408 18:44:37.887228  844748 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0408 18:44:37.887337  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:44:37.917129  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:44:38.266819  844748 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0408 18:44:38.341190  844748 addons.go:234] Setting addon gcp-auth=true in "addons-038955"
	I0408 18:44:38.341282  844748 host.go:66] Checking if "addons-038955" exists ...
	I0408 18:44:38.341762  844748 cli_runner.go:164] Run: docker container inspect addons-038955 --format={{.State.Status}}
	I0408 18:44:38.365309  844748 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0408 18:44:38.365359  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-038955
	I0408 18:44:38.386518  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/addons-038955/id_rsa Username:docker}
	I0408 18:44:38.389193  844748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.295218142s)
	I0408 18:44:38.389229  844748 addons.go:470] Verifying addon ingress=true in "addons-038955"
	I0408 18:44:38.391022  844748 out.go:177] * Verifying ingress addon...
	I0408 18:44:38.389403  844748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.262158052s)
	I0408 18:44:38.389443  844748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.139016945s)
	I0408 18:44:38.389466  844748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.77376706s)
	I0408 18:44:38.389494  844748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.742557486s)
	I0408 18:44:38.389540  844748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.61044652s)
	I0408 18:44:38.389623  844748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.016380229s)
	I0408 18:44:38.389667  844748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.711231388s)
	I0408 18:44:38.393895  844748 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0408 18:44:38.394201  844748 addons.go:470] Verifying addon registry=true in "addons-038955"
	I0408 18:44:38.396321  844748 out.go:177] * Verifying registry addon...
	I0408 18:44:38.394339  844748 addons.go:470] Verifying addon metrics-server=true in "addons-038955"
	W0408 18:44:38.394390  844748 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0408 18:44:38.399425  844748 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0408 18:44:38.401567  844748 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0408 18:44:38.401623  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:38.401637  844748 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-038955 service yakd-dashboard -n yakd-dashboard
	
	I0408 18:44:38.401727  844748 retry.go:31] will retry after 314.889564ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0408 18:44:38.409800  844748 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0408 18:44:38.409869  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:38.718692  844748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 18:44:38.901184  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:38.909933  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:39.379736  844748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.407256952s)
	I0408 18:44:39.379810  844748 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-038955"
	I0408 18:44:39.381988  844748 out.go:177] * Verifying csi-hostpath-driver addon...
	I0408 18:44:39.380005  844748 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.014674378s)
	I0408 18:44:39.385276  844748 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0408 18:44:39.387753  844748 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0408 18:44:39.389925  844748 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0408 18:44:39.391752  844748 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0408 18:44:39.391773  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0408 18:44:39.400030  844748 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0408 18:44:39.400053  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:39.418602  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:39.419169  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:39.465511  844748 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0408 18:44:39.465578  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0408 18:44:39.493315  844748 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0408 18:44:39.493381  844748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0408 18:44:39.517926  844748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0408 18:44:39.830369  844748 pod_ready.go:102] pod "coredns-76f75df574-dj6cf" in "kube-system" namespace has status "Ready":"False"
	I0408 18:44:39.891739  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:39.899270  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:39.906916  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:40.391456  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:40.398211  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:40.406191  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:40.487974  844748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.769231266s)
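Note: the earlier apply failure (no matches for kind "VolumeSnapshotClass" ... ensure CRDs are installed first) is the usual CRD-registration race: the batch creates the snapshot CRDs and a VolumeSnapshotClass instance together, and the client's RESTMapper has not yet discovered the new kind when it validates the instance. The retried apply above succeeds roughly 1.8s later, once API discovery has caught up. The backoff behind "will retry after 314.889564ms" is, in spirit, something like the following sketch (not minikube's retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// applyManifests is a stand-in for `kubectl apply -f ...`; here it fails
	// randomly to simulate the window before the CRDs become discoverable.
	func applyManifests() error {
		if rand.Intn(3) != 0 {
			return errors.New(`no matches for kind "VolumeSnapshotClass"`)
		}
		return nil
	}

	func main() {
		backoff := 300 * time.Millisecond
		for attempt := 1; attempt <= 5; attempt++ {
			if err := applyManifests(); err != nil {
				fmt.Printf("attempt %d failed: %v; will retry after %v\n", attempt, err, backoff)
				time.Sleep(backoff)
				backoff *= 2
				continue
			}
			fmt.Println("apply succeeded")
			return
		}
		fmt.Println("giving up")
	}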
	I0408 18:44:40.776742  844748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.258734971s)
	I0408 18:44:40.781070  844748 addons.go:470] Verifying addon gcp-auth=true in "addons-038955"
	I0408 18:44:40.783231  844748 out.go:177] * Verifying gcp-auth addon...
	I0408 18:44:40.786208  844748 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0408 18:44:40.790176  844748 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0408 18:44:40.790243  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:40.893492  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:40.899741  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:40.906321  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:41.290964  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:41.393900  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:41.406198  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:41.409709  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:41.790536  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:41.892057  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:41.899344  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:41.907237  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:42.290748  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:42.329644  844748 pod_ready.go:102] pod "coredns-76f75df574-dj6cf" in "kube-system" namespace has status "Ready":"False"
	I0408 18:44:42.392393  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:42.399115  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:42.408166  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:42.790204  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:42.915517  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:42.916149  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:42.917008  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:43.290163  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:43.391268  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:43.398824  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:43.406306  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:43.790063  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:43.894691  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:43.916862  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:43.919217  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:44.289927  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:44.393284  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:44.400978  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:44.406441  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:44.790732  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:44.829526  844748 pod_ready.go:102] pod "coredns-76f75df574-dj6cf" in "kube-system" namespace has status "Ready":"False"
	I0408 18:44:44.891015  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:44.898279  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:44.906643  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:45.291410  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:45.392093  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:45.398070  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:45.406726  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:45.791036  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:45.892035  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:45.898480  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:45.907645  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:46.290678  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:46.390947  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:46.398181  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:46.406438  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:46.797336  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:46.835161  844748 pod_ready.go:102] pod "coredns-76f75df574-dj6cf" in "kube-system" namespace has status "Ready":"False"
	I0408 18:44:46.895258  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:46.925815  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:46.926462  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:47.290264  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:47.392337  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:47.398342  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:47.407074  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:47.790257  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:47.891076  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:47.898367  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:47.906878  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:48.290662  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:48.390655  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:48.398352  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:48.406689  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:48.791435  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:48.891257  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:48.899666  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:48.907258  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:49.292525  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:49.332139  844748 pod_ready.go:102] pod "coredns-76f75df574-dj6cf" in "kube-system" namespace has status "Ready":"False"
	I0408 18:44:49.391446  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:49.398688  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:49.406977  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:49.791936  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:49.891430  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:49.898476  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:49.907033  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:50.289655  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:50.391439  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:50.398819  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:50.406574  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:50.796288  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:50.891975  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:50.898894  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:50.908629  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:51.290508  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:51.390703  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:51.398917  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:51.407104  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:51.793488  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:51.833692  844748 pod_ready.go:102] pod "coredns-76f75df574-dj6cf" in "kube-system" namespace has status "Ready":"False"
	I0408 18:44:51.891518  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:51.898431  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:51.907269  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:52.290417  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:52.391259  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:52.398435  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:52.407026  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:52.792756  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:52.896428  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:52.902384  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:52.906761  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:53.290785  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:53.391104  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:53.398972  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:53.406439  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:53.789547  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:53.891565  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:53.898565  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:53.907543  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:54.290707  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:54.328990  844748 pod_ready.go:102] pod "coredns-76f75df574-dj6cf" in "kube-system" namespace has status "Ready":"False"
	I0408 18:44:54.391347  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:54.398051  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:54.406734  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:54.790813  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:54.890759  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:54.899385  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:54.907099  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:55.290521  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:55.390921  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:55.399032  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:55.406698  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:55.790283  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:55.891844  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:55.899221  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:55.907135  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:56.290198  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:56.391361  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:56.398469  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:56.407402  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:56.789762  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:56.829310  844748 pod_ready.go:102] pod "coredns-76f75df574-dj6cf" in "kube-system" namespace has status "Ready":"False"
	I0408 18:44:56.890892  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:56.899268  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:56.907250  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:57.290359  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:57.391809  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:57.397981  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:57.406604  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:57.789971  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:57.893857  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:57.898725  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:57.906772  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:58.289812  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:58.390623  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:58.399338  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:58.406769  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:58.790455  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:58.891526  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:58.899042  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:58.908735  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:59.290555  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:59.328532  844748 pod_ready.go:102] pod "coredns-76f75df574-dj6cf" in "kube-system" namespace has status "Ready":"False"
	I0408 18:44:59.391037  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:59.398968  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:59.406302  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:44:59.790152  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:44:59.891510  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:44:59.898064  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:44:59.906722  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:00.315296  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:00.430052  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:00.430228  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:00.430909  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:00.791626  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:00.892132  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:00.898657  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:00.907688  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:01.290056  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:01.329990  844748 pod_ready.go:102] pod "coredns-76f75df574-dj6cf" in "kube-system" namespace has status "Ready":"False"
	I0408 18:45:01.392558  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:01.399535  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:01.407673  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:01.791150  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:01.892930  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:01.899558  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:01.910068  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:02.290155  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:02.391320  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:02.398466  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:02.407214  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:02.790608  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:02.892046  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:02.903692  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:02.908071  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:03.290382  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:03.391551  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:03.398686  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:03.406325  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:03.789824  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:03.835237  844748 pod_ready.go:102] pod "coredns-76f75df574-dj6cf" in "kube-system" namespace has status "Ready":"False"
	I0408 18:45:03.891677  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:03.899556  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:03.908280  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:04.290045  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:04.391583  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:04.399765  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:04.408510  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:04.789648  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:04.891457  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:04.898589  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:04.906425  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:05.289850  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:05.328906  844748 pod_ready.go:92] pod "coredns-76f75df574-dj6cf" in "kube-system" namespace has status "Ready":"True"
	I0408 18:45:05.328927  844748 pod_ready.go:81] duration metric: took 31.506753176s for pod "coredns-76f75df574-dj6cf" in "kube-system" namespace to be "Ready" ...
	I0408 18:45:05.328940  844748 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-038955" in "kube-system" namespace to be "Ready" ...
	I0408 18:45:05.334204  844748 pod_ready.go:92] pod "etcd-addons-038955" in "kube-system" namespace has status "Ready":"True"
	I0408 18:45:05.334226  844748 pod_ready.go:81] duration metric: took 5.247093ms for pod "etcd-addons-038955" in "kube-system" namespace to be "Ready" ...
	I0408 18:45:05.334240  844748 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-038955" in "kube-system" namespace to be "Ready" ...
	I0408 18:45:05.339631  844748 pod_ready.go:92] pod "kube-apiserver-addons-038955" in "kube-system" namespace has status "Ready":"True"
	I0408 18:45:05.339658  844748 pod_ready.go:81] duration metric: took 5.407352ms for pod "kube-apiserver-addons-038955" in "kube-system" namespace to be "Ready" ...
	I0408 18:45:05.339671  844748 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-038955" in "kube-system" namespace to be "Ready" ...
	I0408 18:45:05.345504  844748 pod_ready.go:92] pod "kube-controller-manager-addons-038955" in "kube-system" namespace has status "Ready":"True"
	I0408 18:45:05.345528  844748 pod_ready.go:81] duration metric: took 5.816876ms for pod "kube-controller-manager-addons-038955" in "kube-system" namespace to be "Ready" ...
	I0408 18:45:05.345540  844748 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-287hv" in "kube-system" namespace to be "Ready" ...
	I0408 18:45:05.352093  844748 pod_ready.go:92] pod "kube-proxy-287hv" in "kube-system" namespace has status "Ready":"True"
	I0408 18:45:05.352115  844748 pod_ready.go:81] duration metric: took 6.5672ms for pod "kube-proxy-287hv" in "kube-system" namespace to be "Ready" ...
	I0408 18:45:05.352127  844748 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-038955" in "kube-system" namespace to be "Ready" ...
	I0408 18:45:05.391964  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:05.398909  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:05.406501  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:05.735108  844748 pod_ready.go:92] pod "kube-scheduler-addons-038955" in "kube-system" namespace has status "Ready":"True"
	I0408 18:45:05.735185  844748 pod_ready.go:81] duration metric: took 383.046793ms for pod "kube-scheduler-addons-038955" in "kube-system" namespace to be "Ready" ...
	I0408 18:45:05.735210  844748 pod_ready.go:38] duration metric: took 32.429923736s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
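
	[Editor's note] The pod_ready.go loop above polls each system-critical pod until its PodReady condition flips to True (31.5s for coredns here, logged at ~2.5s intervals). A minimal sketch of that style of check with client-go, assuming a reachable kubeconfig; the pod name, namespace, and timeout are taken from the log, everything else is illustrative rather than minikube's actual implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute) // the log waits "up to 6m0s" per pod
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"coredns-76f75df574-dj6cf", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // polling cadence is an assumption
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }
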
	I0408 18:45:05.735249  844748 api_server.go:52] waiting for apiserver process to appear ...
	I0408 18:45:05.735345  844748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 18:45:05.754423  844748 api_server.go:72] duration metric: took 35.104897597s to wait for apiserver process to appear ...
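
	[Editor's note] The api_server.go step above shells out to pgrep until a kube-apiserver process exists. In minikube this command runs inside the node over ssh_runner; a local stand-in using the exact pattern from the log (the retry count and interval are assumptions):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	for i := 0; i < 30; i++ {
    		// pgrep -x: match the command name exactly, -n: newest match,
    		// -f: match against the full command line
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("apiserver process never appeared")
    }
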
	I0408 18:45:05.754501  844748 api_server.go:88] waiting for apiserver healthz status ...
	I0408 18:45:05.754547  844748 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0408 18:45:05.763927  844748 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0408 18:45:05.765564  844748 api_server.go:141] control plane version: v1.29.3
	I0408 18:45:05.765638  844748 api_server.go:131] duration metric: took 11.105027ms to wait for apiserver health ...
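
	[Editor's note] The healthz probe logged above is an HTTPS GET against the apiserver; a 200 with body "ok" counts as healthy. A sketch of that request; the real check authenticates with the cluster CA, so InsecureSkipVerify here is an illustrative shortcut, not what minikube does:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.49.2:8443/healthz" // endpoint taken from the log
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		fmt.Println("healthz unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
    }
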
	I0408 18:45:05.765661  844748 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 18:45:05.791757  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:05.893437  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:05.905149  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:05.910542  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:05.947326  844748 system_pods.go:59] 18 kube-system pods found
	I0408 18:45:05.947411  844748 system_pods.go:61] "coredns-76f75df574-dj6cf" [b39315ac-75c4-4fad-855a-4bc0b17e2e3d] Running
	I0408 18:45:05.947442  844748 system_pods.go:61] "csi-hostpath-attacher-0" [162a6ba0-781d-406c-a00b-5455db26413c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0408 18:45:05.947479  844748 system_pods.go:61] "csi-hostpath-resizer-0" [98870f74-f035-4067-951f-d7eff48a9e7c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0408 18:45:05.947516  844748 system_pods.go:61] "csi-hostpathplugin-2nj46" [e9aafeb0-3be6-4d86-8623-230007d81a6f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0408 18:45:05.947538  844748 system_pods.go:61] "etcd-addons-038955" [0e7b492a-52db-49e1-bb2d-45a8cf2bfa11] Running
	I0408 18:45:05.947554  844748 system_pods.go:61] "kindnet-pcsbd" [62a3f65d-f88a-467f-a924-aa775c9e9aa9] Running
	I0408 18:45:05.947577  844748 system_pods.go:61] "kube-apiserver-addons-038955" [b7e92095-288a-45a3-9711-45a24b9ac27a] Running
	I0408 18:45:05.947595  844748 system_pods.go:61] "kube-controller-manager-addons-038955" [a53dab82-a8a5-408d-8ed7-7eb6cc837c0f] Running
	I0408 18:45:05.947615  844748 system_pods.go:61] "kube-ingress-dns-minikube" [cc69f6d4-fec0-4693-8a72-7dd0c54c4001] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0408 18:45:05.947633  844748 system_pods.go:61] "kube-proxy-287hv" [b431e0c9-025f-4bc0-93ec-18e0793464c6] Running
	I0408 18:45:05.947656  844748 system_pods.go:61] "kube-scheduler-addons-038955" [d0d1b5d2-3392-4046-ad96-e53315b2dc7b] Running
	I0408 18:45:05.947676  844748 system_pods.go:61] "metrics-server-75d6c48ddd-6htmh" [d16800d3-aec2-4631-8d52-da9f53dd8819] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 18:45:05.947697  844748 system_pods.go:61] "nvidia-device-plugin-daemonset-mhg4z" [6c0fc97e-3cef-4840-9b64-155d69d9d548] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0408 18:45:05.947717  844748 system_pods.go:61] "registry-cdb5h" [f28c85f3-d85e-49b0-a5ac-ca0f5b10cbaf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0408 18:45:05.947744  844748 system_pods.go:61] "registry-proxy-v6nbd" [620468ea-2130-4402-bd89-371f755f849b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0408 18:45:05.947764  844748 system_pods.go:61] "snapshot-controller-58dbcc7b99-blhbc" [2abb977c-12af-4f5b-8976-4fb8fc5949f6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 18:45:05.947784  844748 system_pods.go:61] "snapshot-controller-58dbcc7b99-nwg6w" [ce190883-777f-45c3-97e7-301bf1c39874] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 18:45:05.947810  844748 system_pods.go:61] "storage-provisioner" [b4497b8b-2ae1-42d0-b99f-e61f616ac0db] Running
	I0408 18:45:05.947830  844748 system_pods.go:74] duration metric: took 182.151111ms to wait for pod list to return data ...
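
	[Editor's note] The per-pod strings above ("Pending / Ready:ContainersNotReady (containers with unready status: [...])") combine the pod phase with any Ready/ContainersReady conditions that are not True. A sketch of how such a summary can be derived from a PodStatus; the field names are client-go's, but the exact formatting is minikube's and only approximated here:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // summarize renders phase plus any not-True readiness conditions.
    func summarize(pod corev1.Pod) string {
    	s := string(pod.Status.Phase)
    	for _, c := range pod.Status.Conditions {
    		if c.Status != corev1.ConditionTrue &&
    			(c.Type == corev1.PodReady || c.Type == corev1.ContainersReady) {
    			s += fmt.Sprintf(" / %s:%s (%s)", c.Type, c.Reason, c.Message)
    		}
    	}
    	return s
    }

    func main() {
    	pod := corev1.Pod{Status: corev1.PodStatus{
    		Phase: corev1.PodPending,
    		Conditions: []corev1.PodCondition{{
    			Type:    corev1.PodReady,
    			Status:  corev1.ConditionFalse,
    			Reason:  "ContainersNotReady",
    			Message: "containers with unready status: [registry]",
    		}},
    	}}
    	// Prints: Pending / Ready:ContainersNotReady (containers with unready status: [registry])
    	fmt.Println(summarize(pod))
    }
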
	I0408 18:45:05.947852  844748 default_sa.go:34] waiting for default service account to be created ...
	I0408 18:45:06.126040  844748 default_sa.go:45] found service account: "default"
	I0408 18:45:06.126110  844748 default_sa.go:55] duration metric: took 178.235908ms for default service account to be created ...
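
	[Editor's note] The default_sa.go step above only has to confirm that the "default" ServiceAccount exists in the "default" namespace. A compact sketch of that lookup, assuming a reachable kubeconfig:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	sa, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
    	if err != nil {
    		fmt.Println("service account not found yet:", err)
    		return
    	}
    	fmt.Printf("found service account: %q\n", sa.Name)
    }
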
	I0408 18:45:06.126135  844748 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 18:45:06.292538  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:06.336821  844748 system_pods.go:86] 18 kube-system pods found
	I0408 18:45:06.336862  844748 system_pods.go:89] "coredns-76f75df574-dj6cf" [b39315ac-75c4-4fad-855a-4bc0b17e2e3d] Running
	I0408 18:45:06.336872  844748 system_pods.go:89] "csi-hostpath-attacher-0" [162a6ba0-781d-406c-a00b-5455db26413c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0408 18:45:06.337133  844748 system_pods.go:89] "csi-hostpath-resizer-0" [98870f74-f035-4067-951f-d7eff48a9e7c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0408 18:45:06.337155  844748 system_pods.go:89] "csi-hostpathplugin-2nj46" [e9aafeb0-3be6-4d86-8623-230007d81a6f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0408 18:45:06.337161  844748 system_pods.go:89] "etcd-addons-038955" [0e7b492a-52db-49e1-bb2d-45a8cf2bfa11] Running
	I0408 18:45:06.337179  844748 system_pods.go:89] "kindnet-pcsbd" [62a3f65d-f88a-467f-a924-aa775c9e9aa9] Running
	I0408 18:45:06.337191  844748 system_pods.go:89] "kube-apiserver-addons-038955" [b7e92095-288a-45a3-9711-45a24b9ac27a] Running
	I0408 18:45:06.337197  844748 system_pods.go:89] "kube-controller-manager-addons-038955" [a53dab82-a8a5-408d-8ed7-7eb6cc837c0f] Running
	I0408 18:45:06.337211  844748 system_pods.go:89] "kube-ingress-dns-minikube" [cc69f6d4-fec0-4693-8a72-7dd0c54c4001] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0408 18:45:06.337217  844748 system_pods.go:89] "kube-proxy-287hv" [b431e0c9-025f-4bc0-93ec-18e0793464c6] Running
	I0408 18:45:06.337226  844748 system_pods.go:89] "kube-scheduler-addons-038955" [d0d1b5d2-3392-4046-ad96-e53315b2dc7b] Running
	I0408 18:45:06.337232  844748 system_pods.go:89] "metrics-server-75d6c48ddd-6htmh" [d16800d3-aec2-4631-8d52-da9f53dd8819] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 18:45:06.337254  844748 system_pods.go:89] "nvidia-device-plugin-daemonset-mhg4z" [6c0fc97e-3cef-4840-9b64-155d69d9d548] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0408 18:45:06.337268  844748 system_pods.go:89] "registry-cdb5h" [f28c85f3-d85e-49b0-a5ac-ca0f5b10cbaf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0408 18:45:06.337283  844748 system_pods.go:89] "registry-proxy-v6nbd" [620468ea-2130-4402-bd89-371f755f849b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0408 18:45:06.337295  844748 system_pods.go:89] "snapshot-controller-58dbcc7b99-blhbc" [2abb977c-12af-4f5b-8976-4fb8fc5949f6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 18:45:06.337305  844748 system_pods.go:89] "snapshot-controller-58dbcc7b99-nwg6w" [ce190883-777f-45c3-97e7-301bf1c39874] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 18:45:06.337314  844748 system_pods.go:89] "storage-provisioner" [b4497b8b-2ae1-42d0-b99f-e61f616ac0db] Running
	I0408 18:45:06.337323  844748 system_pods.go:126] duration metric: took 211.170656ms to wait for k8s-apps to be running ...
	I0408 18:45:06.337335  844748 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 18:45:06.337417  844748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:45:06.358477  844748 system_svc.go:56] duration metric: took 21.13195ms WaitForService to wait for kubelet
	I0408 18:45:06.358510  844748 kubeadm.go:576] duration metric: took 35.708987954s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
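
	[Editor's note] The system_svc.go check above relies on the exit code of "systemctl is-active --quiet": zero while the kubelet unit is active, non-zero otherwise. A local stand-in (the real call goes through minikube's ssh_runner; the argument list is copied verbatim from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// --quiet suppresses output; the exit code alone carries the answer
    	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }
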
	I0408 18:45:06.358532  844748 node_conditions.go:102] verifying NodePressure condition ...
	I0408 18:45:06.391807  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:06.399971  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:06.411875  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:06.527160  844748 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0408 18:45:06.527196  844748 node_conditions.go:123] node cpu capacity is 2
	I0408 18:45:06.527211  844748 node_conditions.go:105] duration metric: took 168.644666ms to run NodePressure ...
	I0408 18:45:06.527226  844748 start.go:240] waiting for startup goroutines ...
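
	[Editor's note] The NodePressure step above reads capacity straight off the Node object (ephemeral-storage 203034800Ki, cpu 2 in this run). A sketch of that read with client-go; the kubeconfig path and node name are assumptions:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-038955", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    	cpu := node.Status.Capacity[corev1.ResourceCPU]
    	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
    	fmt.Printf("node cpu capacity is %s\n", cpu.String())
    	// A pressure check would also inspect node conditions:
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
    			fmt.Printf("%s=%s\n", c.Type, c.Status)
    		}
    	}
    }
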
	I0408 18:45:06.789613  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:06.890909  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:06.898480  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:06.907961  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:07.289727  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:07.391802  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:07.398173  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:07.406675  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:07.790829  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:07.891712  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:07.899037  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:07.906906  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:08.290479  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:08.391887  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:08.398691  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:08.411085  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:08.789956  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:08.891161  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:08.898584  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:08.907726  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:09.289816  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:09.391575  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:09.398606  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:09.407367  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:09.790793  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:09.921939  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:09.922183  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:09.922951  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:10.298691  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:10.392436  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:10.398780  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:10.407353  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:10.789853  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:10.892020  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:10.898354  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:10.907179  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:11.289754  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:11.391871  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:11.397915  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:11.407303  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:11.790635  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:11.896190  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:11.898750  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:11.906555  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:12.290764  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:12.391885  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:12.399928  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:12.407556  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:12.790320  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:12.891445  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:12.898312  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:12.906859  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:13.289784  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:13.391705  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:13.398895  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:13.406827  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:13.790165  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:13.897229  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:13.904421  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:13.911277  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:14.290071  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:14.390802  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:14.398546  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:14.406898  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:14.790552  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:14.891850  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:14.898744  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:14.906422  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:15.290678  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:15.391385  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:15.398285  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:15.406840  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:15.790137  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:15.891433  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:15.899742  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:15.906979  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:16.290701  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:16.393594  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:16.400477  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:16.407079  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:16.789794  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:16.892218  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:16.898552  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:16.907320  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:17.291095  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:17.392169  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:17.398543  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:17.410518  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:17.791992  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:17.891517  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:17.899492  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:17.907348  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:18.289751  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:18.395215  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:18.399006  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:18.406492  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:18.791023  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:18.891100  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:18.898047  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:18.907357  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:19.290416  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:19.394116  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:19.398271  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:19.407007  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:19.790223  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:19.891555  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:19.898267  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:19.907171  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:20.290793  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:20.394498  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:20.398844  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:20.409883  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:20.789612  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:20.891880  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:20.899398  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:20.907789  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:21.297113  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:21.395961  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:21.400298  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:21.407842  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:21.790228  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:21.890745  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:21.898891  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:21.906309  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:22.291029  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:22.391723  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:22.398941  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:22.406902  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:22.791868  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:22.892866  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:22.898602  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:22.908923  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:23.290484  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:23.391219  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:23.403268  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:23.406433  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:23.790713  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:23.891865  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:23.898609  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:23.906968  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:24.290106  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:24.392616  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:24.400024  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:24.406774  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:24.790672  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:24.895197  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:24.901567  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:24.908427  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:25.292790  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:25.393833  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:25.400038  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:25.408430  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:25.791188  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:25.894832  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:25.899287  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:25.918023  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:26.290432  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:26.401332  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:26.419083  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:26.422441  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:26.791581  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:26.904106  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:26.907393  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:26.911874  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:27.291147  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:27.392448  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:27.400307  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:27.407984  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:27.791120  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:27.893614  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:27.898890  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:27.907012  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:28.290375  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:28.400499  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:28.402358  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:28.408217  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:28.790411  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:28.892011  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:28.898657  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:28.906667  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:29.290661  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:29.391238  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:29.399548  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:29.407015  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:29.790134  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:29.891510  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:29.899042  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:29.906245  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:30.290619  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:30.392034  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:30.406694  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:30.409335  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:30.789855  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:30.896957  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:30.899230  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:30.906678  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:45:31.289766  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:31.392015  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:31.399640  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:31.407181  844748 kapi.go:107] duration metric: took 53.007751409s to wait for kubernetes.io/minikube-addons=registry ...
	I0408 18:45:31.793093  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:31.891530  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:31.898783  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:32.291007  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:32.391229  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:32.399297  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:32.791251  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:32.890863  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:32.899576  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:33.290615  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:33.391534  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:33.398339  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:33.790574  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:33.892655  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:33.906943  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:34.290547  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:34.392706  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:34.398749  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:34.790662  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:34.891160  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:34.898333  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:35.291447  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:35.391893  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:35.402539  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:35.789778  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:35.891230  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:35.898142  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:36.290294  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:36.391394  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:36.398133  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:36.789967  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:36.892316  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:45:36.898354  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:37.291297  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:37.392874  844748 kapi.go:107] duration metric: took 58.007593516s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0408 18:45:37.398543  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:37.790394  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:37.899102  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:38.289988  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:38.398403  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:38.797321  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:38.898757  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:39.290470  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:39.398424  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:39.790377  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:39.898890  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:40.291401  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:40.397965  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:40.789731  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:40.898901  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:41.290068  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:41.398455  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:41.789971  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:41.898904  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:42.291181  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:42.399158  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:42.790399  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:42.900967  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:43.290636  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:43.404475  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:43.790537  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:43.898971  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:44.290744  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:44.401106  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:44.790255  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:44.899320  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:45.290761  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:45.398792  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:45.790911  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:45.898992  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:46.289943  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:46.398901  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:46.790594  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:46.900697  844748 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:45:47.290983  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:47.400360  844748 kapi.go:107] duration metric: took 1m9.006461256s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0408 18:45:47.791149  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:48.290674  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:48.790745  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:49.291233  844748 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:45:49.790141  844748 kapi.go:107] duration metric: took 1m9.00393078s to wait for kubernetes.io/minikube-addons=gcp-auth ...
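
Editor's note: the kapi.go:96 lines above are minikube's addon wait loop — roughly every 500ms it lists the pods behind one label selector, logs their phase while any is still Pending, and emits the kapi.go:107 duration metric once they are all Running. A minimal client-go sketch of that pattern (an illustrative helper, not minikube's actual kapi code; all names here are hypothetical):

    package kapisketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPods polls a label selector until every matching pod is Running,
    // logging the current phase on each attempt, like the kapi.go:96 lines above.
    func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        start := time.Now()
        err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient errors and empty lists: keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
        if err == nil {
            fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
        }
        return err
    }
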
	I0408 18:45:49.792134  844748 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-038955 cluster.
	I0408 18:45:49.793943  844748 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0408 18:45:49.795608  844748 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
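
Editor's note: to make the `gcp-auth-skip-secret` hint above concrete, the label goes on the pod's metadata so the gcp-auth webhook skips it at admission time. A minimal sketch (the pod name and image are placeholders; the "true" value is the conventional one for this label, stated here as an assumption):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"  # assumption: conventional value for the skip label
    spec:
      containers:
      - name: app
        image: nginx                  # placeholder image
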
	I0408 18:45:49.797611  844748 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, default-storageclass, inspektor-gadget, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0408 18:45:49.799295  844748 addons.go:505] duration metric: took 1m19.149484799s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin default-storageclass inspektor-gadget cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0408 18:45:49.799374  844748 start.go:245] waiting for cluster config update ...
	I0408 18:45:49.799400  844748 start.go:254] writing updated cluster config ...
	I0408 18:45:49.799703  844748 ssh_runner.go:195] Run: rm -f paused
	I0408 18:45:50.159197  844748 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 18:45:50.161612  844748 out.go:177] * Done! kubectl is now configured to use "addons-038955" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	e5b4135610beb       dd1b12fcb6097       9 seconds ago        Exited              hello-world-app                          2                   2fc8197dfc7c9       hello-world-app-5d77478584-d4k9f
	c04818464a223       b8c82647e8a25       34 seconds ago       Running             nginx                                    0                   37dc99701fccf       nginx
	3cc8510fcfaca       6ef582f3ec844       About a minute ago   Running             gcp-auth                                 0                   b18c5c80e6a92       gcp-auth-7d69788767-n6tt9
	bd9e2ba366305       6505abd14fdf8       About a minute ago   Exited              controller                               0                   7d6c38f0ee61b       ingress-nginx-controller-65496f9567-cghkx
	cb2328991c2a2       ee6d597e62dc8       About a minute ago   Running             csi-snapshotter                          0                   7842b302cd502       csi-hostpathplugin-2nj46
	22490f95b35b9       642ded511e141       About a minute ago   Running             csi-provisioner                          0                   7842b302cd502       csi-hostpathplugin-2nj46
	3d347463d4028       922312104da8a       About a minute ago   Running             liveness-probe                           0                   7842b302cd502       csi-hostpathplugin-2nj46
	6ee064138f82b       08f6b2990811a       About a minute ago   Running             hostpath                                 0                   7842b302cd502       csi-hostpathplugin-2nj46
	c321655d3f3dd       0107d56dbc0be       About a minute ago   Running             node-driver-registrar                    0                   7842b302cd502       csi-hostpathplugin-2nj46
	663d8304c9697       6727f8bc3105d       About a minute ago   Running             cloud-spanner-emulator                   0                   c8fb16bdcc39f       cloud-spanner-emulator-5446596998-xkxz6
	bf92736d107f9       1461903ec4fe9       About a minute ago   Running             csi-external-health-monitor-controller   0                   7842b302cd502       csi-hostpathplugin-2nj46
	76c4b46478316       487fa743e1e22       About a minute ago   Running             csi-resizer                              0                   8aa50067192e4       csi-hostpath-resizer-0
	e5dbb1021385b       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller               0                   b87af24c94d4f       snapshot-controller-58dbcc7b99-blhbc
	666789272e9f6       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller               0                   963c21acaee84       snapshot-controller-58dbcc7b99-nwg6w
	cb6ae8dce7554       9a80d518f102c       About a minute ago   Running             csi-attacher                             0                   97edcbd870b76       csi-hostpath-attacher-0
	28885306e52a6       1a024e390dd05       About a minute ago   Exited              patch                                    0                   6572436677a8c       ingress-nginx-admission-patch-9m8bw
	840a625623a19       1a024e390dd05       About a minute ago   Exited              create                                   0                   7ef688df71647       ingress-nginx-admission-create-hdxfp
	c233c814c5927       c0cfb4ce73bda       About a minute ago   Running             nvidia-device-plugin-ctr                 0                   7cbada7202f1f       nvidia-device-plugin-daemonset-mhg4z
	44262cdaa6027       20e3f2db01e81       About a minute ago   Running             yakd                                     0                   9e0f6f8db9fca       yakd-dashboard-9947fc6bf-djbbj
	f2f8bdc5e2697       7ce2150c8929b       About a minute ago   Running             local-path-provisioner                   0                   4b3d10d7a189a       local-path-provisioner-78b46b4d5c-dzc7w
	19d9aa6aee69f       2437cf7621777       About a minute ago   Running             coredns                                  0                   7a1f0149fccfb       coredns-76f75df574-dj6cf
	d2ab2c46c443e       ba04bb24b9575       2 minutes ago        Running             storage-provisioner                      0                   66b7029e036ba       storage-provisioner
	70fff30f22178       0e9b4a0d1e86d       2 minutes ago        Running             kube-proxy                               0                   385fdff786a96       kube-proxy-287hv
	ec276f5ec73d5       4740c1948d3fc       2 minutes ago        Running             kindnet-cni                              0                   df2786fd4e564       kindnet-pcsbd
	89d63ef61321d       2581114f5709d       2 minutes ago        Running             kube-apiserver                           0                   76ec41d838a6e       kube-apiserver-addons-038955
	f20482e874ffb       4b51f9f6bc9b9       2 minutes ago        Running             kube-scheduler                           0                   4393fd04a1c29       kube-scheduler-addons-038955
	2afd5d2ce9845       121d70d9a3805       2 minutes ago        Running             kube-controller-manager                  0                   1467e7f6a0265       kube-controller-manager-addons-038955
	61b1c138a090e       014faa467e297       2 minutes ago        Running             etcd                                     0                   f8ce170f2519a       etcd-addons-038955
	
	
	==> containerd <==
	Apr 08 18:46:50 addons-038955 containerd[769]: time="2024-04-08T18:46:50.550233878Z" level=info msg="StartContainer for \"e5b4135610beb35d302b6f59eb8013a866cbbd9702df7bf8e2eba68ef96c687d\" returns successfully"
	Apr 08 18:46:50 addons-038955 containerd[769]: time="2024-04-08T18:46:50.570559813Z" level=info msg="RemoveContainer for \"536eba218c61bc954a50c73e38649adef0bd98ee11a6aed225eb7ce83441e6f3\""
	Apr 08 18:46:50 addons-038955 containerd[769]: time="2024-04-08T18:46:50.579748929Z" level=info msg="RemoveContainer for \"536eba218c61bc954a50c73e38649adef0bd98ee11a6aed225eb7ce83441e6f3\" returns successfully"
	Apr 08 18:46:50 addons-038955 containerd[769]: time="2024-04-08T18:46:50.585798824Z" level=info msg="shim disconnected" id=e5b4135610beb35d302b6f59eb8013a866cbbd9702df7bf8e2eba68ef96c687d
	Apr 08 18:46:50 addons-038955 containerd[769]: time="2024-04-08T18:46:50.585867286Z" level=warning msg="cleaning up after shim disconnected" id=e5b4135610beb35d302b6f59eb8013a866cbbd9702df7bf8e2eba68ef96c687d namespace=k8s.io
	Apr 08 18:46:50 addons-038955 containerd[769]: time="2024-04-08T18:46:50.585878674Z" level=info msg="cleaning up dead shim"
	Apr 08 18:46:50 addons-038955 containerd[769]: time="2024-04-08T18:46:50.600731611Z" level=warning msg="cleanup warnings time=\"2024-04-08T18:46:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9167 runtime=io.containerd.runc.v2\n"
	Apr 08 18:46:51 addons-038955 containerd[769]: time="2024-04-08T18:46:51.583021882Z" level=info msg="RemoveContainer for \"b086e3882b77194f9ede1634d1894ac4f1628cae7db39873999f0fa49d0f2983\""
	Apr 08 18:46:51 addons-038955 containerd[769]: time="2024-04-08T18:46:51.591796830Z" level=info msg="RemoveContainer for \"b086e3882b77194f9ede1634d1894ac4f1628cae7db39873999f0fa49d0f2983\" returns successfully"
	Apr 08 18:46:52 addons-038955 containerd[769]: time="2024-04-08T18:46:52.389890824Z" level=info msg="StopContainer for \"bd9e2ba3663051c8dfe7d5f20af5af3f60f7b0b9c0d205b117089372d1b6b7b9\" with timeout 2 (s)"
	Apr 08 18:46:52 addons-038955 containerd[769]: time="2024-04-08T18:46:52.390445100Z" level=info msg="Stop container \"bd9e2ba3663051c8dfe7d5f20af5af3f60f7b0b9c0d205b117089372d1b6b7b9\" with signal terminated"
	Apr 08 18:46:54 addons-038955 containerd[769]: time="2024-04-08T18:46:54.397387739Z" level=info msg="Kill container \"bd9e2ba3663051c8dfe7d5f20af5af3f60f7b0b9c0d205b117089372d1b6b7b9\""
	Apr 08 18:46:54 addons-038955 containerd[769]: time="2024-04-08T18:46:54.464977025Z" level=info msg="shim disconnected" id=bd9e2ba3663051c8dfe7d5f20af5af3f60f7b0b9c0d205b117089372d1b6b7b9
	Apr 08 18:46:54 addons-038955 containerd[769]: time="2024-04-08T18:46:54.465042795Z" level=warning msg="cleaning up after shim disconnected" id=bd9e2ba3663051c8dfe7d5f20af5af3f60f7b0b9c0d205b117089372d1b6b7b9 namespace=k8s.io
	Apr 08 18:46:54 addons-038955 containerd[769]: time="2024-04-08T18:46:54.465055095Z" level=info msg="cleaning up dead shim"
	Apr 08 18:46:54 addons-038955 containerd[769]: time="2024-04-08T18:46:54.473268169Z" level=warning msg="cleanup warnings time=\"2024-04-08T18:46:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9260 runtime=io.containerd.runc.v2\n"
	Apr 08 18:46:54 addons-038955 containerd[769]: time="2024-04-08T18:46:54.476462298Z" level=info msg="StopContainer for \"bd9e2ba3663051c8dfe7d5f20af5af3f60f7b0b9c0d205b117089372d1b6b7b9\" returns successfully"
	Apr 08 18:46:54 addons-038955 containerd[769]: time="2024-04-08T18:46:54.482635907Z" level=info msg="StopPodSandbox for \"7d6c38f0ee61b38f5b3aeced3d0b589571adf309d971d6d56474664442aeb98f\""
	Apr 08 18:46:54 addons-038955 containerd[769]: time="2024-04-08T18:46:54.482881628Z" level=info msg="Container to stop \"bd9e2ba3663051c8dfe7d5f20af5af3f60f7b0b9c0d205b117089372d1b6b7b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Apr 08 18:46:54 addons-038955 containerd[769]: time="2024-04-08T18:46:54.523980150Z" level=info msg="shim disconnected" id=7d6c38f0ee61b38f5b3aeced3d0b589571adf309d971d6d56474664442aeb98f
	Apr 08 18:46:54 addons-038955 containerd[769]: time="2024-04-08T18:46:54.524268832Z" level=warning msg="cleaning up after shim disconnected" id=7d6c38f0ee61b38f5b3aeced3d0b589571adf309d971d6d56474664442aeb98f namespace=k8s.io
	Apr 08 18:46:54 addons-038955 containerd[769]: time="2024-04-08T18:46:54.524296039Z" level=info msg="cleaning up dead shim"
	Apr 08 18:46:54 addons-038955 containerd[769]: time="2024-04-08T18:46:54.534247894Z" level=warning msg="cleanup warnings time=\"2024-04-08T18:46:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9294 runtime=io.containerd.runc.v2\n"
	Apr 08 18:46:54 addons-038955 containerd[769]: time="2024-04-08T18:46:54.598615031Z" level=info msg="TearDown network for sandbox \"7d6c38f0ee61b38f5b3aeced3d0b589571adf309d971d6d56474664442aeb98f\" successfully"
	Apr 08 18:46:54 addons-038955 containerd[769]: time="2024-04-08T18:46:54.598731286Z" level=info msg="StopPodSandbox for \"7d6c38f0ee61b38f5b3aeced3d0b589571adf309d971d6d56474664442aeb98f\" returns successfully"
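
Editor's note: the StopContainer/Kill pair above is the standard CRI graceful-stop sequence — containerd delivers SIGTERM with the 2-second grace period requested by the kubelet, then SIGKILLs the ingress controller when it has not exited in time. The same sequence can be reproduced by hand with crictl (assuming crictl is pointed at this node's containerd socket; the ID is the exited controller container from the status table above):

    # SIGTERM first, then SIGKILL once the 2s grace period expires
    crictl stop --timeout 2 bd9e2ba366305
    # remove the exited container
    crictl rm bd9e2ba366305
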
	
	
	==> coredns [19d9aa6aee69f05fd74bcfa995c55d9d88f0594c873730c74014e59f3683f956] <==
	[INFO] 10.244.0.19:55453 - 25156 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000660415s
	[INFO] 10.244.0.19:55453 - 23818 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000166912s
	[INFO] 10.244.0.19:55453 - 27231 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002349374s
	[INFO] 10.244.0.19:59715 - 59061 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.009101381s
	[INFO] 10.244.0.19:59715 - 43639 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00013424s
	[INFO] 10.244.0.19:55453 - 17282 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002942386s
	[INFO] 10.244.0.19:55453 - 58080 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000096662s
	[INFO] 10.244.0.19:34435 - 28597 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000140878s
	[INFO] 10.244.0.19:43358 - 17700 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00006573s
	[INFO] 10.244.0.19:34435 - 34996 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076437s
	[INFO] 10.244.0.19:34435 - 25251 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054357s
	[INFO] 10.244.0.19:43358 - 1441 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000122294s
	[INFO] 10.244.0.19:34435 - 21992 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000078422s
	[INFO] 10.244.0.19:43358 - 57760 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005133s
	[INFO] 10.244.0.19:34435 - 13543 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083288s
	[INFO] 10.244.0.19:43358 - 14996 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000069488s
	[INFO] 10.244.0.19:34435 - 48441 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000144259s
	[INFO] 10.244.0.19:43358 - 1947 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057911s
	[INFO] 10.244.0.19:43358 - 24567 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000186777s
	[INFO] 10.244.0.19:34435 - 50327 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001695762s
	[INFO] 10.244.0.19:43358 - 31742 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001473466s
	[INFO] 10.244.0.19:34435 - 27862 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001313904s
	[INFO] 10.244.0.19:43358 - 25852 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001417754s
	[INFO] 10.244.0.19:43358 - 33419 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067551s
	[INFO] 10.244.0.19:34435 - 28801 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00027027s
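
Editor's note: the NXDOMAIN/NOERROR pattern above is ordinary resolv.conf search-path expansion, not a resolution failure. With ndots:5, a name with fewer than five dots such as hello-world-app.default.svc.cluster.local is first tried against every search suffix (each answering NXDOMAIN), and only the final unsuffixed query answers NOERROR. A kubelet-generated resolv.conf consistent with the suffixes in these queries would look like this (reconstructed from the log; the nameserver IP is the usual kube-dns default, stated as an assumption):

    nameserver 10.96.0.10
    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    options ndots:5
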
	
	
	==> describe nodes <==
	Name:               addons-038955
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-038955
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9de8f0b190a4305b11b3a925ec3e499cf3fc021
	                    minikube.k8s.io/name=addons-038955
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T18_44_18_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-038955
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-038955"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 18:44:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-038955
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 18:46:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 18:46:51 +0000   Mon, 08 Apr 2024 18:44:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 18:46:51 +0000   Mon, 08 Apr 2024 18:44:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 18:46:51 +0000   Mon, 08 Apr 2024 18:44:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 18:46:51 +0000   Mon, 08 Apr 2024 18:44:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-038955
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 02b668fc42b84666969ab9a8f1a341e1
	  System UUID:                4c4002ed-e6d8-4b0b-932b-87a837fb4647
	  Boot ID:                    b4b2abab-4517-475f-9e8e-63d816803507
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5446596998-xkxz6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  default                     hello-world-app-5d77478584-d4k9f           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gcp-auth                    gcp-auth-7d69788767-n6tt9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 coredns-76f75df574-dj6cf                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m28s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 csi-hostpathplugin-2nj46                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 etcd-addons-038955                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m41s
	  kube-system                 kindnet-pcsbd                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m28s
	  kube-system                 kube-apiserver-addons-038955               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-controller-manager-addons-038955      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-proxy-287hv                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-addons-038955               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 nvidia-device-plugin-daemonset-mhg4z       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 snapshot-controller-58dbcc7b99-blhbc       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 snapshot-controller-58dbcc7b99-nwg6w       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  local-path-storage          local-path-provisioner-78b46b4d5c-dzc7w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-djbbj             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m27s  kube-proxy       
	  Normal  Starting                 2m41s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m41s  kubelet          Node addons-038955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m41s  kubelet          Node addons-038955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m41s  kubelet          Node addons-038955 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m41s  kubelet          Node addons-038955 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m41s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m41s  kubelet          Node addons-038955 status is now: NodeReady
	  Normal  RegisteredNode           2m29s  node-controller  Node addons-038955 event: Registered Node addons-038955 in Controller
	
	
	==> dmesg <==
	[  +0.004087] FS-Cache: Duplicate cookie detected
	[  +0.000719] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001078] FS-Cache: O-cookie d=000000002ce2b4d1{9p.inode} n=0000000021cb3a42
	[  +0.001034] FS-Cache: O-key=[8] '01d2c90000000000'
	[  +0.000689] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000912] FS-Cache: N-cookie d=000000002ce2b4d1{9p.inode} n=0000000099010f39
	[  +0.001063] FS-Cache: N-key=[8] '01d2c90000000000'
	[  +2.668908] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000987] FS-Cache: O-cookie d=000000002ce2b4d1{9p.inode} n=00000000c36a3df6
	[  +0.001121] FS-Cache: O-key=[8] '00d2c90000000000'
	[  +0.000723] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000919] FS-Cache: N-cookie d=000000002ce2b4d1{9p.inode} n=00000000aadf05af
	[  +0.001030] FS-Cache: N-key=[8] '00d2c90000000000'
	[  +0.402272] FS-Cache: Duplicate cookie detected
	[  +0.000736] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.000927] FS-Cache: O-cookie d=000000002ce2b4d1{9p.inode} n=000000004aea7e04
	[  +0.001043] FS-Cache: O-key=[8] '06d2c90000000000'
	[  +0.000711] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000938] FS-Cache: N-cookie d=000000002ce2b4d1{9p.inode} n=00000000344a30e9
	[  +0.001030] FS-Cache: N-key=[8] '06d2c90000000000'
	[Apr 8 18:17] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.002163] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.006957] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.155193] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	
	
	==> etcd [61b1c138a090ea2af28592982ddc992f496dd50149afefe39380b92e36d89b5e] <==
	{"level":"info","ts":"2024-04-08T18:44:10.914099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-04-08T18:44:10.914177Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-04-08T18:44:10.919884Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-08T18:44:10.920071Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-08T18:44:10.920095Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-08T18:44:10.920168Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-04-08T18:44:10.920177Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-04-08T18:44:11.606033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-08T18:44:11.606154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-08T18:44:11.606215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-04-08T18:44:11.606266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-04-08T18:44:11.606293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-08T18:44:11.606341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-04-08T18:44:11.606372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-08T18:44:11.610133Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T18:44:11.614253Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-038955 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-08T18:44:11.614476Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T18:44:11.614821Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T18:44:11.615Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T18:44:11.615087Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T18:44:11.615194Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-08T18:44:11.615266Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-08T18:44:11.615364Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T18:44:11.617234Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-08T18:44:11.659572Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [3cc8510fcfacace7d50f7efe9d226c0e17cd0f653f6402e6246bebd909144089] <==
	2024/04/08 18:45:48 GCP Auth Webhook started!
	2024/04/08 18:46:00 Ready to marshal response ...
	2024/04/08 18:46:00 Ready to write response ...
	2024/04/08 18:46:23 Ready to marshal response ...
	2024/04/08 18:46:23 Ready to write response ...
	2024/04/08 18:46:33 Ready to marshal response ...
	2024/04/08 18:46:33 Ready to write response ...
	2024/04/08 18:46:40 Ready to marshal response ...
	2024/04/08 18:46:40 Ready to write response ...
	
	
	==> kernel <==
	 18:47:00 up  3:29,  0 users,  load average: 1.39, 1.86, 2.60
	Linux addons-038955 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [ec276f5ec73d5e720709ad31b9a632fadc1bfe45956679c930f2642d1dcaf1bd] <==
	I0408 18:45:02.353925       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0408 18:45:02.369004       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0408 18:45:02.369038       1 main.go:227] handling current node
	I0408 18:45:12.381470       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0408 18:45:12.381499       1 main.go:227] handling current node
	I0408 18:45:22.395234       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0408 18:45:22.395260       1 main.go:227] handling current node
	I0408 18:45:32.399983       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0408 18:45:32.400011       1 main.go:227] handling current node
	I0408 18:45:42.409756       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0408 18:45:42.409791       1 main.go:227] handling current node
	I0408 18:45:52.413605       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0408 18:45:52.413987       1 main.go:227] handling current node
	I0408 18:46:02.425859       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0408 18:46:02.425885       1 main.go:227] handling current node
	I0408 18:46:12.438735       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0408 18:46:12.438772       1 main.go:227] handling current node
	I0408 18:46:22.451052       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0408 18:46:22.451078       1 main.go:227] handling current node
	I0408 18:46:32.455648       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0408 18:46:32.455678       1 main.go:227] handling current node
	I0408 18:46:42.459584       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0408 18:46:42.459615       1 main.go:227] handling current node
	I0408 18:46:52.471255       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0408 18:46:52.471288       1 main.go:227] handling current node
	
	
	==> kube-apiserver [89d63ef61321d7fb7e1491d69a40a148e6f87ca5ddf5d08117690e1469f4e47c] <==
	E0408 18:44:37.365275       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 18:44:37.366370       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0408 18:44:38.097257       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.107.238.83"}
	I0408 18:44:38.118869       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.106.190.251"}
	I0408 18:44:38.179091       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0408 18:44:39.179440       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.111.46.185"}
	I0408 18:44:39.196665       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0408 18:44:39.316140       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.104.234.97"}
	I0408 18:44:40.552774       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.99.223.220"}
	E0408 18:45:36.871977       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.80.68:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.80.68:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.80.68:443: connect: connection refused
	W0408 18:45:36.872115       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 18:45:36.872172       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0408 18:45:36.872882       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.80.68:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.80.68:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.80.68:443: connect: connection refused
	E0408 18:45:36.878576       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.80.68:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.80.68:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.80.68:443: connect: connection refused
	I0408 18:45:37.020506       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0408 18:46:03.457565       1 watch.go:253] http2: stream closed
	I0408 18:46:17.950165       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0408 18:46:18.983968       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0408 18:46:23.520126       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0408 18:46:23.901661       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.18.229"}
	I0408 18:46:33.733517       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.7.9"}
	I0408 18:46:37.375770       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0408 18:46:47.850452       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [2afd5d2ce9845b0954fcda53f2160c1df1741f0d3fc42abfe78d15fbed24c431] <==
	I0408 18:46:30.531386       1 shared_informer.go:318] Caches are synced for resource quota
	I0408 18:46:30.863357       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0408 18:46:30.863413       1 shared_informer.go:318] Caches are synced for garbage collector
	I0408 18:46:33.366804       1 event.go:376] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0408 18:46:33.390719       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-d4k9f"
	I0408 18:46:33.415550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="49.571595ms"
	I0408 18:46:33.435725       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.103221ms"
	I0408 18:46:33.457033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="21.252633ms"
	I0408 18:46:33.457334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="70.644µs"
	W0408 18:46:35.833602       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 18:46:35.833636       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0408 18:46:36.535589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="63.334µs"
	I0408 18:46:37.541953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.986µs"
	I0408 18:46:38.567565       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="68.651µs"
	I0408 18:46:40.516446       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0408 18:46:49.998412       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0408 18:46:50.008011       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0408 18:46:50.619688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="9.011044ms"
	I0408 18:46:50.619764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="31.736µs"
	I0408 18:46:51.363795       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0408 18:46:51.369154       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="6.531µs"
	I0408 18:46:51.374068       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0408 18:46:51.605896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="7.536595ms"
	I0408 18:46:51.606232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="77.093µs"
	I0408 18:47:00.233890       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	
	==> kube-proxy [70fff30f221786d064bd0e79ab3fbded398a1edd07812ec7cfeee07678273128] <==
	I0408 18:44:32.174724       1 server_others.go:72] "Using iptables proxy"
	I0408 18:44:32.201298       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0408 18:44:32.252206       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0408 18:44:32.252237       1 server_others.go:168] "Using iptables Proxier"
	I0408 18:44:32.254399       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0408 18:44:32.254425       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0408 18:44:32.254449       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 18:44:32.254653       1 server.go:865] "Version info" version="v1.29.3"
	I0408 18:44:32.254669       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 18:44:32.255793       1 config.go:188] "Starting service config controller"
	I0408 18:44:32.255827       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 18:44:32.255847       1 config.go:97] "Starting endpoint slice config controller"
	I0408 18:44:32.255851       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 18:44:32.262326       1 config.go:315] "Starting node config controller"
	I0408 18:44:32.262349       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 18:44:32.356122       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0408 18:44:32.356176       1 shared_informer.go:318] Caches are synced for service config
	I0408 18:44:32.364344       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f20482e874ffbaf82944fa5ed6def22a9b294dc995413f72fac1f32102df20a3] <==
	W0408 18:44:15.139356       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 18:44:15.139434       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0408 18:44:15.139590       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0408 18:44:15.139656       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0408 18:44:15.139786       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0408 18:44:15.139897       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0408 18:44:15.948188       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0408 18:44:15.948438       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0408 18:44:16.025537       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0408 18:44:16.025579       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0408 18:44:16.069523       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 18:44:16.069655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0408 18:44:16.111558       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 18:44:16.111665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0408 18:44:16.125376       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 18:44:16.125420       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 18:44:16.139057       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 18:44:16.139172       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0408 18:44:16.168271       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0408 18:44:16.168405       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0408 18:44:16.175005       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 18:44:16.175133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0408 18:44:16.463655       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 18:44:16.463699       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0408 18:44:18.710423       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 18:46:49 addons-038955 kubelet[1539]: I0408 18:46:49.563437    1539 scope.go:117] "RemoveContainer" containerID="29daa69376601276ef8ffeae8a1bb4c3f97c9169b30f588c0a109e3199e7aa19"
	Apr 08 18:46:49 addons-038955 kubelet[1539]: I0408 18:46:49.582024    1539 scope.go:117] "RemoveContainer" containerID="29daa69376601276ef8ffeae8a1bb4c3f97c9169b30f588c0a109e3199e7aa19"
	Apr 08 18:46:49 addons-038955 kubelet[1539]: E0408 18:46:49.587992    1539 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29daa69376601276ef8ffeae8a1bb4c3f97c9169b30f588c0a109e3199e7aa19\": not found" containerID="29daa69376601276ef8ffeae8a1bb4c3f97c9169b30f588c0a109e3199e7aa19"
	Apr 08 18:46:49 addons-038955 kubelet[1539]: I0408 18:46:49.588051    1539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29daa69376601276ef8ffeae8a1bb4c3f97c9169b30f588c0a109e3199e7aa19"} err="failed to get container status \"29daa69376601276ef8ffeae8a1bb4c3f97c9169b30f588c0a109e3199e7aa19\": rpc error: code = NotFound desc = an error occurred when try to find container \"29daa69376601276ef8ffeae8a1bb4c3f97c9169b30f588c0a109e3199e7aa19\": not found"
	Apr 08 18:46:50 addons-038955 kubelet[1539]: I0408 18:46:50.044616    1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vh7c\" (UniqueName: \"kubernetes.io/projected/cc69f6d4-fec0-4693-8a72-7dd0c54c4001-kube-api-access-9vh7c\") pod \"cc69f6d4-fec0-4693-8a72-7dd0c54c4001\" (UID: \"cc69f6d4-fec0-4693-8a72-7dd0c54c4001\") "
	Apr 08 18:46:50 addons-038955 kubelet[1539]: I0408 18:46:50.046838    1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc69f6d4-fec0-4693-8a72-7dd0c54c4001-kube-api-access-9vh7c" (OuterVolumeSpecName: "kube-api-access-9vh7c") pod "cc69f6d4-fec0-4693-8a72-7dd0c54c4001" (UID: "cc69f6d4-fec0-4693-8a72-7dd0c54c4001"). InnerVolumeSpecName "kube-api-access-9vh7c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 08 18:46:50 addons-038955 kubelet[1539]: I0408 18:46:50.145449    1539 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9vh7c\" (UniqueName: \"kubernetes.io/projected/cc69f6d4-fec0-4693-8a72-7dd0c54c4001-kube-api-access-9vh7c\") on node \"addons-038955\" DevicePath \"\""
	Apr 08 18:46:50 addons-038955 kubelet[1539]: I0408 18:46:50.478064    1539 scope.go:117] "RemoveContainer" containerID="b086e3882b77194f9ede1634d1894ac4f1628cae7db39873999f0fa49d0f2983"
	Apr 08 18:46:50 addons-038955 kubelet[1539]: I0408 18:46:50.483227    1539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c128896b-c09b-4d7c-af6f-e1589234a9a5" path="/var/lib/kubelet/pods/c128896b-c09b-4d7c-af6f-e1589234a9a5/volumes"
	Apr 08 18:46:50 addons-038955 kubelet[1539]: I0408 18:46:50.568180    1539 scope.go:117] "RemoveContainer" containerID="536eba218c61bc954a50c73e38649adef0bd98ee11a6aed225eb7ce83441e6f3"
	Apr 08 18:46:51 addons-038955 kubelet[1539]: I0408 18:46:51.383855    1539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-d4k9f" podStartSLOduration=16.726081102 podStartE2EDuration="18.383803664s" podCreationTimestamp="2024-04-08 18:46:33 +0000 UTC" firstStartedPulling="2024-04-08 18:46:34.012619679 +0000 UTC m=+135.687772739" lastFinishedPulling="2024-04-08 18:46:35.670342241 +0000 UTC m=+137.345495301" observedRunningTime="2024-04-08 18:46:50.618532989 +0000 UTC m=+152.293686057" watchObservedRunningTime="2024-04-08 18:46:51.383803664 +0000 UTC m=+153.058956724"
	Apr 08 18:46:51 addons-038955 kubelet[1539]: I0408 18:46:51.578884    1539 scope.go:117] "RemoveContainer" containerID="b086e3882b77194f9ede1634d1894ac4f1628cae7db39873999f0fa49d0f2983"
	Apr 08 18:46:51 addons-038955 kubelet[1539]: I0408 18:46:51.579434    1539 scope.go:117] "RemoveContainer" containerID="e5b4135610beb35d302b6f59eb8013a866cbbd9702df7bf8e2eba68ef96c687d"
	Apr 08 18:46:51 addons-038955 kubelet[1539]: E0408 18:46:51.579720    1539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-d4k9f_default(1ec3a0b4-8d9e-430d-abc7-2269ce39a153)\"" pod="default/hello-world-app-5d77478584-d4k9f" podUID="1ec3a0b4-8d9e-430d-abc7-2269ce39a153"
	Apr 08 18:46:52 addons-038955 kubelet[1539]: I0408 18:46:52.481124    1539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6acbb13d-b73e-4533-bcfe-3b6afa5522fa" path="/var/lib/kubelet/pods/6acbb13d-b73e-4533-bcfe-3b6afa5522fa/volumes"
	Apr 08 18:46:52 addons-038955 kubelet[1539]: I0408 18:46:52.481570    1539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c809d9c-ed16-45c5-b5bd-14951c01fb45" path="/var/lib/kubelet/pods/6c809d9c-ed16-45c5-b5bd-14951c01fb45/volumes"
	Apr 08 18:46:52 addons-038955 kubelet[1539]: I0408 18:46:52.482056    1539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc69f6d4-fec0-4693-8a72-7dd0c54c4001" path="/var/lib/kubelet/pods/cc69f6d4-fec0-4693-8a72-7dd0c54c4001/volumes"
	Apr 08 18:46:54 addons-038955 kubelet[1539]: I0408 18:46:54.590203    1539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d6c38f0ee61b38f5b3aeced3d0b589571adf309d971d6d56474664442aeb98f"
	Apr 08 18:46:54 addons-038955 kubelet[1539]: I0408 18:46:54.675825    1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d676da3-cb45-427b-b7fe-54dec0a030de-webhook-cert\") pod \"1d676da3-cb45-427b-b7fe-54dec0a030de\" (UID: \"1d676da3-cb45-427b-b7fe-54dec0a030de\") "
	Apr 08 18:46:54 addons-038955 kubelet[1539]: I0408 18:46:54.675880    1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9wgk\" (UniqueName: \"kubernetes.io/projected/1d676da3-cb45-427b-b7fe-54dec0a030de-kube-api-access-g9wgk\") pod \"1d676da3-cb45-427b-b7fe-54dec0a030de\" (UID: \"1d676da3-cb45-427b-b7fe-54dec0a030de\") "
	Apr 08 18:46:54 addons-038955 kubelet[1539]: I0408 18:46:54.677897    1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d676da3-cb45-427b-b7fe-54dec0a030de-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1d676da3-cb45-427b-b7fe-54dec0a030de" (UID: "1d676da3-cb45-427b-b7fe-54dec0a030de"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 08 18:46:54 addons-038955 kubelet[1539]: I0408 18:46:54.680700    1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d676da3-cb45-427b-b7fe-54dec0a030de-kube-api-access-g9wgk" (OuterVolumeSpecName: "kube-api-access-g9wgk") pod "1d676da3-cb45-427b-b7fe-54dec0a030de" (UID: "1d676da3-cb45-427b-b7fe-54dec0a030de"). InnerVolumeSpecName "kube-api-access-g9wgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 08 18:46:54 addons-038955 kubelet[1539]: I0408 18:46:54.776857    1539 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d676da3-cb45-427b-b7fe-54dec0a030de-webhook-cert\") on node \"addons-038955\" DevicePath \"\""
	Apr 08 18:46:54 addons-038955 kubelet[1539]: I0408 18:46:54.776905    1539 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g9wgk\" (UniqueName: \"kubernetes.io/projected/1d676da3-cb45-427b-b7fe-54dec0a030de-kube-api-access-g9wgk\") on node \"addons-038955\" DevicePath \"\""
	Apr 08 18:46:56 addons-038955 kubelet[1539]: I0408 18:46:56.480694    1539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d676da3-cb45-427b-b7fe-54dec0a030de" path="/var/lib/kubelet/pods/1d676da3-cb45-427b-b7fe-54dec0a030de/volumes"
	
	
	==> storage-provisioner [d2ab2c46c443e5de3179c56e949e1f5c6d51e094d83fcb9196be140d7e6f9e1a] <==
	I0408 18:44:36.632478       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 18:44:36.658641       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 18:44:36.658682       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 18:44:36.697049       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 18:44:36.697475       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-038955_8a9ce726-c693-4029-b49f-03d607052085!
	I0408 18:44:36.697882       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fcdf98b9-1bc8-4ea8-ae21-ea5b09f12c85", APIVersion:"v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-038955_8a9ce726-c693-4029-b49f-03d607052085 became leader
	I0408 18:44:36.798148       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-038955_8a9ce726-c693-4029-b49f-03d607052085!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-038955 -n addons-038955
helpers_test.go:261: (dbg) Run:  kubectl --context addons-038955 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: task-pv-pod-restore
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-038955 describe pod task-pv-pod-restore
helpers_test.go:282: (dbg) kubectl --context addons-038955 describe pod task-pv-pod-restore:

-- stdout --
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-038955/192.168.49.2
	Start Time:       Mon, 08 Apr 2024 18:47:01 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-js82c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-js82c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  0s    default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-038955

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (38.31s)
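The root failure above is the nslookup step timing out against the cluster node IP; the nginx test pod itself became healthy in about 9 seconds. A minimal manual re-run of the failing check, sketched here on the assumption that the addons-038955 profile is still running with the ingress-dns addon enabled:

	# Hypothetical reproduction; profile name and test domain are taken from the trace above.
	NODE_IP="$(out/minikube-linux-arm64 -p addons-038955 ip)"   # 192.168.49.2 in this run
	nslookup hello-john.test "$NODE_IP"
	# A healthy ingress-dns pod should answer for hello-john.test; this run
	# instead timed out with "no servers could be reached".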

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image load --daemon gcr.io/google-containers/addon-resizer:functional-435105 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-435105 image load --daemon gcr.io/google-containers/addon-resizer:functional-435105 --alsologtostderr: (3.963183429s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-435105" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.27s)
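The test exercises a daemon-image load followed by a listing check. A sketch of re-running it by hand, assuming the functional-435105 profile is up and the tagged image exists in the local Docker daemon (the ImageReloadDaemon failure below repeats the same load/ls sequence, so the same sketch applies):

	# Commands taken from the trace above; the grep is only an illustrative manual check.
	out/minikube-linux-arm64 -p functional-435105 image load --daemon \
	    gcr.io/google-containers/addon-resizer:functional-435105 --alsologtostderr
	out/minikube-linux-arm64 -p functional-435105 image ls | grep addon-resizer \
	    || echo "not loaded, matching the assertion at functional_test.go:442"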

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image load --daemon gcr.io/google-containers/addon-resizer:functional-435105 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-435105 image load --daemon gcr.io/google-containers/addon-resizer:functional-435105 --alsologtostderr: (3.622604995s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-435105" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.86s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.003496325s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-435105
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image load --daemon gcr.io/google-containers/addon-resizer:functional-435105 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-435105 image load --daemon gcr.io/google-containers/addon-resizer:functional-435105 --alsologtostderr: (3.194032384s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-435105" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.46s)
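This variant pulls and retags the image on the host before loading it, so a manual reproduction needs the docker side as well. A sketch, assuming Docker is available on the host:

	# Pull/tag/load flow taken from the trace above.
	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 \
	    gcr.io/google-containers/addon-resizer:functional-435105
	out/minikube-linux-arm64 -p functional-435105 image load --daemon \
	    gcr.io/google-containers/addon-resizer:functional-435105 --alsologtostderr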

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image save gcr.io/google-containers/addon-resizer:functional-435105 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)
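The assertion here is simply that `image save` writes a tarball at the given path. A sketch of checking that by hand, using /tmp as an illustrative destination instead of the Jenkins workspace path from the log:

	out/minikube-linux-arm64 -p functional-435105 image save \
	    gcr.io/google-containers/addon-resizer:functional-435105 /tmp/addon-resizer-save.tar
	test -f /tmp/addon-resizer-save.tar && echo "tarball written" \
	    || echo "missing, matching the assertion at functional_test.go:385"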

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

** stderr ** 
	I0408 18:53:09.336939  875521 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:53:09.337173  875521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:53:09.337205  875521 out.go:304] Setting ErrFile to fd 2...
	I0408 18:53:09.337227  875521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:53:09.337489  875521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
	I0408 18:53:09.338216  875521 config.go:182] Loaded profile config "functional-435105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:53:09.338380  875521 config.go:182] Loaded profile config "functional-435105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:53:09.338936  875521 cli_runner.go:164] Run: docker container inspect functional-435105 --format={{.State.Status}}
	I0408 18:53:09.354569  875521 ssh_runner.go:195] Run: systemctl --version
	I0408 18:53:09.354665  875521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-435105
	I0408 18:53:09.369908  875521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33580 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/functional-435105/id_rsa Username:docker}
	I0408 18:53:09.462612  875521 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0408 18:53:09.462681  875521 cache_images.go:254] Failed to load cached images for profile functional-435105. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0408 18:53:09.462702  875521 cache_images.go:262] succeeded pushing to: 
	I0408 18:53:09.462707  875521 cache_images.go:263] failed pushing to: functional-435105

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
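The stat error in the stderr block ("no such file or directory" for addon-resizer-save.tar) shows this failure is a knock-on effect of ImageSaveToFile above: the tarball this test loads was never written. A standalone sketch, assuming a tarball was first saved to /tmp as in the previous example:

	out/minikube-linux-arm64 -p functional-435105 image load /tmp/addon-resizer-save.tar --alsologtostderr
	out/minikube-linux-arm64 -p functional-435105 image ls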

TestScheduledStopUnix (37.8s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-693017 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-693017 --memory=2048 --driver=docker  --container-runtime=containerd: (33.307225711s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-693017 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-693017 -n scheduled-stop-693017
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-693017 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 980845 running but should have been killed on reschedule of stop
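The assertion at scheduled_stop_test.go:98 is that rescheduling a stop kills the previously scheduled stop process before arming a new one; here pid 980845 from an earlier --schedule invocation was still alive. A sketch of the behaviour under test, with the profile name taken from the log:

	out/minikube-linux-arm64 stop -p scheduled-stop-693017 --schedule 5m   # arms a delayed stop
	out/minikube-linux-arm64 stop -p scheduled-stop-693017 --schedule 15s  # should replace the pending one
	# The test expects the process behind the first invocation to be gone at
	# this point; in this run it was still running.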
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-04-08 19:16:35.115208805 +0000 UTC m=+2014.532868451
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-693017
helpers_test.go:235: (dbg) docker inspect scheduled-stop-693017:

-- stdout --
	[
	    {
	        "Id": "8a7ce209cdc2f075506101ddc383d5aa539cbe28b9257e101e2eb6ffdbdb5b2c",
	        "Created": "2024-04-08T19:16:06.619422556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 978962,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-08T19:16:06.859239906Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8071b9dd214010e53befdd8360b63c717c30e750b027ce9f279f5c79f4d48a44",
	        "ResolvConfPath": "/var/lib/docker/containers/8a7ce209cdc2f075506101ddc383d5aa539cbe28b9257e101e2eb6ffdbdb5b2c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8a7ce209cdc2f075506101ddc383d5aa539cbe28b9257e101e2eb6ffdbdb5b2c/hostname",
	        "HostsPath": "/var/lib/docker/containers/8a7ce209cdc2f075506101ddc383d5aa539cbe28b9257e101e2eb6ffdbdb5b2c/hosts",
	        "LogPath": "/var/lib/docker/containers/8a7ce209cdc2f075506101ddc383d5aa539cbe28b9257e101e2eb6ffdbdb5b2c/8a7ce209cdc2f075506101ddc383d5aa539cbe28b9257e101e2eb6ffdbdb5b2c-json.log",
	        "Name": "/scheduled-stop-693017",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-693017:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-693017",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/971c467deb3e39ee9a3116b852aba7c28253c7f74a835c0c23e9aa3cf3f15441-init/diff:/var/lib/docker/overlay2/56d7d8514c63dab1b3fb6d26c1f92815f34275e9a0ff6f17f417c17da312f7ae/diff",
	                "MergedDir": "/var/lib/docker/overlay2/971c467deb3e39ee9a3116b852aba7c28253c7f74a835c0c23e9aa3cf3f15441/merged",
	                "UpperDir": "/var/lib/docker/overlay2/971c467deb3e39ee9a3116b852aba7c28253c7f74a835c0c23e9aa3cf3f15441/diff",
	                "WorkDir": "/var/lib/docker/overlay2/971c467deb3e39ee9a3116b852aba7c28253c7f74a835c0c23e9aa3cf3f15441/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-693017",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-693017/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-693017",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-693017",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-693017",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3876045f7b785e5f018a5c806fec21d3dc38b1d9469ef3bba3d609b5bdc2ce97",
	            "SandboxKey": "/var/run/docker/netns/3876045f7b78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33765"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33764"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33761"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33763"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33762"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-693017": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "058be509508000281645fe128d363a64d930d72cc779415fddcd3ed0b23ef8bc",
	                    "EndpointID": "21c756e0010683486c3b590d8e8498ef995c411a75c9f8f7f111116780f59c52",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "scheduled-stop-693017",
	                        "8a7ce209cdc2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-693017 -n scheduled-stop-693017
helpers_test.go:244: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-693017 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p scheduled-stop-693017 logs -n 25: (1.058092775s)
helpers_test.go:252: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |        Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p multinode-821774            | multinode-821774      | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:10 UTC | 08 Apr 24 19:11 UTC |
	| start   | -p multinode-821774            | multinode-821774      | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:11 UTC | 08 Apr 24 19:12 UTC |
	|         | --wait=true -v=8               |                       |         |                |                     |                     |
	|         | --alsologtostderr              |                       |         |                |                     |                     |
	| node    | list -p multinode-821774       | multinode-821774      | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:12 UTC |                     |
	| node    | multinode-821774 node delete   | multinode-821774      | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:12 UTC | 08 Apr 24 19:12 UTC |
	|         | m03                            |                       |         |                |                     |                     |
	| stop    | multinode-821774 stop          | multinode-821774      | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:12 UTC | 08 Apr 24 19:12 UTC |
	| start   | -p multinode-821774            | multinode-821774      | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:12 UTC | 08 Apr 24 19:13 UTC |
	|         | --wait=true -v=8               |                       |         |                |                     |                     |
	|         | --alsologtostderr              |                       |         |                |                     |                     |
	|         | --driver=docker                |                       |         |                |                     |                     |
	|         | --container-runtime=containerd |                       |         |                |                     |                     |
	| node    | list -p multinode-821774       | multinode-821774      | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:13 UTC |                     |
	| start   | -p multinode-821774-m02        | multinode-821774-m02  | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:13 UTC |                     |
	|         | --driver=docker                |                       |         |                |                     |                     |
	|         | --container-runtime=containerd |                       |         |                |                     |                     |
	| start   | -p multinode-821774-m03        | multinode-821774-m03  | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:13 UTC | 08 Apr 24 19:14 UTC |
	|         | --driver=docker                |                       |         |                |                     |                     |
	|         | --container-runtime=containerd |                       |         |                |                     |                     |
	| node    | add -p multinode-821774        | multinode-821774      | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:14 UTC |                     |
	| delete  | -p multinode-821774-m03        | multinode-821774-m03  | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:14 UTC | 08 Apr 24 19:14 UTC |
	| delete  | -p multinode-821774            | multinode-821774      | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:14 UTC | 08 Apr 24 19:14 UTC |
	| start   | -p test-preload-752141         | test-preload-752141   | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:14 UTC | 08 Apr 24 19:15 UTC |
	|         | --memory=2200                  |                       |         |                |                     |                     |
	|         | --alsologtostderr              |                       |         |                |                     |                     |
	|         | --wait=true --preload=false    |                       |         |                |                     |                     |
	|         | --driver=docker                |                       |         |                |                     |                     |
	|         | --container-runtime=containerd |                       |         |                |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                       |         |                |                     |                     |
	| image   | test-preload-752141 image pull | test-preload-752141   | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:15 UTC | 08 Apr 24 19:15 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                       |         |                |                     |                     |
	| stop    | -p test-preload-752141         | test-preload-752141   | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:15 UTC | 08 Apr 24 19:15 UTC |
	| start   | -p test-preload-752141         | test-preload-752141   | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:15 UTC | 08 Apr 24 19:15 UTC |
	|         | --memory=2200                  |                       |         |                |                     |                     |
	|         | --alsologtostderr -v=1         |                       |         |                |                     |                     |
	|         | --wait=true --driver=docker    |                       |         |                |                     |                     |
	|         | --container-runtime=containerd |                       |         |                |                     |                     |
	| image   | test-preload-752141 image list | test-preload-752141   | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:15 UTC | 08 Apr 24 19:15 UTC |
	| delete  | -p test-preload-752141         | test-preload-752141   | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:15 UTC | 08 Apr 24 19:16 UTC |
	| start   | -p scheduled-stop-693017       | scheduled-stop-693017 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:16 UTC | 08 Apr 24 19:16 UTC |
	|         | --memory=2048 --driver=docker  |                       |         |                |                     |                     |
	|         | --container-runtime=containerd |                       |         |                |                     |                     |
	| stop    | -p scheduled-stop-693017       | scheduled-stop-693017 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:16 UTC |                     |
	|         | --schedule 5m                  |                       |         |                |                     |                     |
	| stop    | -p scheduled-stop-693017       | scheduled-stop-693017 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:16 UTC |                     |
	|         | --schedule 5m                  |                       |         |                |                     |                     |
	| stop    | -p scheduled-stop-693017       | scheduled-stop-693017 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:16 UTC |                     |
	|         | --schedule 5m                  |                       |         |                |                     |                     |
	| stop    | -p scheduled-stop-693017       | scheduled-stop-693017 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:16 UTC |                     |
	|         | --schedule 15s                 |                       |         |                |                     |                     |
	| stop    | -p scheduled-stop-693017       | scheduled-stop-693017 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:16 UTC |                     |
	|         | --schedule 15s                 |                       |         |                |                     |                     |
	| stop    | -p scheduled-stop-693017       | scheduled-stop-693017 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:16 UTC |                     |
	|         | --schedule 15s                 |                       |         |                |                     |                     |
	|---------|--------------------------------|-----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 19:16:01
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 19:16:01.347611  978532 out.go:291] Setting OutFile to fd 1 ...
	I0408 19:16:01.347742  978532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:16:01.347746  978532 out.go:304] Setting ErrFile to fd 2...
	I0408 19:16:01.347750  978532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:16:01.348038  978532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
	I0408 19:16:01.348590  978532 out.go:298] Setting JSON to false
	I0408 19:16:01.349594  978532 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":14306,"bootTime":1712589456,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0408 19:16:01.349678  978532 start.go:139] virtualization:  
	I0408 19:16:01.352430  978532 out.go:177] * [scheduled-stop-693017] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0408 19:16:01.355647  978532 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 19:16:01.358172  978532 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 19:16:01.355716  978532 notify.go:220] Checking for updates...
	I0408 19:16:01.360425  978532 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	I0408 19:16:01.362781  978532 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	I0408 19:16:01.364638  978532 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0408 19:16:01.367675  978532 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 19:16:01.369939  978532 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 19:16:01.390389  978532 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0408 19:16:01.390497  978532 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 19:16:01.460711  978532 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:42 SystemTime:2024-04-08 19:16:01.447397815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 19:16:01.460805  978532 docker.go:295] overlay module found
	I0408 19:16:01.463350  978532 out.go:177] * Using the docker driver based on user configuration
	I0408 19:16:01.465279  978532 start.go:297] selected driver: docker
	I0408 19:16:01.465289  978532 start.go:901] validating driver "docker" against <nil>
	I0408 19:16:01.465302  978532 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 19:16:01.466065  978532 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 19:16:01.524493  978532 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:42 SystemTime:2024-04-08 19:16:01.512917418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 19:16:01.524710  978532 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 19:16:01.524968  978532 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 19:16:01.527933  978532 out.go:177] * Using Docker driver with root privileges
	I0408 19:16:01.529932  978532 cni.go:84] Creating CNI manager for ""
	I0408 19:16:01.529943  978532 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0408 19:16:01.529952  978532 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
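
The two lines above are minikube's CNI auto-selection: the docker driver combined with the containerd runtime rules out the built-in docker bridge, so kindnet is recommended and NetworkPlugin=cni is set. A minimal Go sketch of that decision; chooseCNI is a hypothetical reduction for illustration, not minikube's actual function:

	package main

	import "fmt"

	// chooseCNI reduces the logged decision: the docker driver with a
	// non-docker runtime (containerd here) cannot use the docker bridge,
	// so kindnet is recommended and the network plugin is forced to cni.
	func chooseCNI(driver, runtime string) (cni, networkPlugin string) {
		if driver == "docker" && runtime != "docker" {
			return "kindnet", "cni"
		}
		return "bridge", "cni"
	}

	func main() {
		cni, plugin := chooseCNI("docker", "containerd")
		fmt.Printf("recommending %s, setting NetworkPlugin=%s\n", cni, plugin)
	}
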
	I0408 19:16:01.530097  978532 start.go:340] cluster config:
	{Name:scheduled-stop-693017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:scheduled-stop-693017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:16:01.533785  978532 out.go:177] * Starting "scheduled-stop-693017" primary control-plane node in "scheduled-stop-693017" cluster
	I0408 19:16:01.535690  978532 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0408 19:16:01.537892  978532 out.go:177] * Pulling base image v0.0.43-1712593525-18585 ...
	I0408 19:16:01.540003  978532 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 19:16:01.540042  978532 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0408 19:16:01.540049  978532 cache.go:56] Caching tarball of preloaded images
	I0408 19:16:01.540080  978532 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local docker daemon
	I0408 19:16:01.540124  978532 preload.go:173] Found /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 19:16:01.540133  978532 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0408 19:16:01.540472  978532 profile.go:143] Saving config to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/config.json ...
	I0408 19:16:01.540496  978532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/config.json: {Name:mkb71a26063fdadf86b3fd4837479cfbaa22537d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
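
Each WriteFile above first acquires a named lock with the {Delay:500ms Timeout:1m0s} spec printed in the log. A sketch of such an acquire loop, assuming a hypothetical tryLock based on exclusive file creation (minikube delegates to a mutex library; this is only illustrative):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// tryLock attempts to create lockPath exclusively; success means we hold the lock.
	func tryLock(lockPath string) bool {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err != nil {
			return false
		}
		f.Close()
		return true
	}

	// acquire retries every delay until timeout, mirroring the
	// {Delay:500ms Timeout:1m0s} spec from the log.
	func acquire(lockPath string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if tryLock(lockPath) {
				return nil
			}
			time.Sleep(delay)
		}
		return errors.New("timed out acquiring " + lockPath)
	}

	func main() {
		if err := acquire("/tmp/config.json.lock", 500*time.Millisecond, time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("lock held; safe to write config.json")
		os.Remove("/tmp/config.json.lock")
	}
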
	I0408 19:16:01.557178  978532 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local docker daemon, skipping pull
	I0408 19:16:01.557194  978532 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd exists in daemon, skipping load
	I0408 19:16:01.557216  978532 cache.go:194] Successfully downloaded all kic artifacts
	I0408 19:16:01.557245  978532 start.go:360] acquireMachinesLock for scheduled-stop-693017: {Name:mk52ebec5b1e75c7a8e640c1bc62898ec38314a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:16:01.557373  978532 start.go:364] duration metric: took 110.25µs to acquireMachinesLock for "scheduled-stop-693017"
	I0408 19:16:01.557401  978532 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-693017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:scheduled-stop-693017 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketV
MnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0408 19:16:01.557478  978532 start.go:125] createHost starting for "" (driver="docker")
	I0408 19:16:01.560247  978532 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0408 19:16:01.560525  978532 start.go:159] libmachine.API.Create for "scheduled-stop-693017" (driver="docker")
	I0408 19:16:01.560560  978532 client.go:168] LocalClient.Create starting
	I0408 19:16:01.560631  978532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem
	I0408 19:16:01.560664  978532 main.go:141] libmachine: Decoding PEM data...
	I0408 19:16:01.560678  978532 main.go:141] libmachine: Parsing certificate...
	I0408 19:16:01.560732  978532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18585-838483/.minikube/certs/cert.pem
	I0408 19:16:01.560748  978532 main.go:141] libmachine: Decoding PEM data...
	I0408 19:16:01.560756  978532 main.go:141] libmachine: Parsing certificate...
	I0408 19:16:01.561152  978532 cli_runner.go:164] Run: docker network inspect scheduled-stop-693017 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0408 19:16:01.576237  978532 cli_runner.go:211] docker network inspect scheduled-stop-693017 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0408 19:16:01.576316  978532 network_create.go:281] running [docker network inspect scheduled-stop-693017] to gather additional debugging logs...
	I0408 19:16:01.576339  978532 cli_runner.go:164] Run: docker network inspect scheduled-stop-693017
	W0408 19:16:01.590424  978532 cli_runner.go:211] docker network inspect scheduled-stop-693017 returned with exit code 1
	I0408 19:16:01.590446  978532 network_create.go:284] error running [docker network inspect scheduled-stop-693017]: docker network inspect scheduled-stop-693017: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-693017 not found
	I0408 19:16:01.590458  978532 network_create.go:286] output of [docker network inspect scheduled-stop-693017]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-693017 not found
	
	** /stderr **
	I0408 19:16:01.590584  978532 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0408 19:16:01.605325  978532 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a63f63e60f29 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:71:a8:58:39} reservation:<nil>}
	I0408 19:16:01.605685  978532 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b78628e149cf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:33:39:ed:9d} reservation:<nil>}
	I0408 19:16:01.606115  978532 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-635750c08010 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:e3:10:6f:81} reservation:<nil>}
	I0408 19:16:01.606551  978532 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002546d10}
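
The subnet probe above walks candidate private /24s in order and skips any already owned by an existing bridge. A simplified sketch of that scan; the starting point and +9 stride are inferred from the logged sequence 49 -> 58 -> 67 -> 76, and the taken set is hard-coded for illustration:

	package main

	import "fmt"

	// firstFreeSubnet steps through the candidate /24s seen in the log
	// (192.168.49.0, then +9 per attempt) and returns the first free one.
	func firstFreeSubnet(taken map[string]bool) string {
		for third := 49; third <= 255; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		// Subnets reported as taken by `docker network inspect bridge` above.
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		fmt.Println("using free private subnet", firstFreeSubnet(taken)) // 192.168.76.0/24
	}
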
	I0408 19:16:01.606569  978532 network_create.go:124] attempt to create docker network scheduled-stop-693017 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0408 19:16:01.606640  978532 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-693017 scheduled-stop-693017
	I0408 19:16:01.673015  978532 network_create.go:108] docker network scheduled-stop-693017 192.168.76.0/24 created
	I0408 19:16:01.673040  978532 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-693017" container
	I0408 19:16:01.673131  978532 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0408 19:16:01.687626  978532 cli_runner.go:164] Run: docker volume create scheduled-stop-693017 --label name.minikube.sigs.k8s.io=scheduled-stop-693017 --label created_by.minikube.sigs.k8s.io=true
	I0408 19:16:01.705414  978532 oci.go:103] Successfully created a docker volume scheduled-stop-693017
	I0408 19:16:01.705486  978532 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-693017-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-693017 --entrypoint /usr/bin/test -v scheduled-stop-693017:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd -d /var/lib
	I0408 19:16:02.251495  978532 oci.go:107] Successfully prepared a docker volume scheduled-stop-693017
	I0408 19:16:02.251533  978532 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 19:16:02.251552  978532 kic.go:194] Starting extracting preloaded images to volume ...
	I0408 19:16:02.251632  978532 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-693017:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd -I lz4 -xf /preloaded.tar -C /extractDir
	I0408 19:16:06.554872  978532 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-693017:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd -I lz4 -xf /preloaded.tar -C /extractDir: (4.30320712s)
	I0408 19:16:06.554894  978532 kic.go:203] duration metric: took 4.303339138s to extract preloaded images to volume ...
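
The preload tarball is extracted into the named docker volume by a throwaway container whose entrypoint is tar, exactly as in the command above. Roughly the same invocation driven from Go with exec.Command; the host path is a placeholder, not the real cache location:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Mount the preload read-only, the target volume at /extractDir,
		// and let the kicbase image's tar do the lz4 extraction.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", "/path/to/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro",
			"-v", "scheduled-stop-693017:/extractDir",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585",
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
	}
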
	W0408 19:16:06.555048  978532 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0408 19:16:06.555158  978532 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0408 19:16:06.605291  978532 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-693017 --name scheduled-stop-693017 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-693017 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-693017 --network scheduled-stop-693017 --ip 192.168.76.2 --volume scheduled-stop-693017:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd
	I0408 19:16:06.867326  978532 cli_runner.go:164] Run: docker container inspect scheduled-stop-693017 --format={{.State.Running}}
	I0408 19:16:06.888417  978532 cli_runner.go:164] Run: docker container inspect scheduled-stop-693017 --format={{.State.Status}}
	I0408 19:16:06.911129  978532 cli_runner.go:164] Run: docker exec scheduled-stop-693017 stat /var/lib/dpkg/alternatives/iptables
	I0408 19:16:06.979694  978532 oci.go:144] the created container "scheduled-stop-693017" has a running status.
	I0408 19:16:06.979712  978532 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18585-838483/.minikube/machines/scheduled-stop-693017/id_rsa...
	I0408 19:16:07.195407  978532 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18585-838483/.minikube/machines/scheduled-stop-693017/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0408 19:16:07.215976  978532 cli_runner.go:164] Run: docker container inspect scheduled-stop-693017 --format={{.State.Status}}
	I0408 19:16:07.235441  978532 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0408 19:16:07.235452  978532 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-693017 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0408 19:16:07.298547  978532 cli_runner.go:164] Run: docker container inspect scheduled-stop-693017 --format={{.State.Status}}
	I0408 19:16:07.318342  978532 machine.go:94] provisionDockerMachine start ...
	I0408 19:16:07.318422  978532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-693017
	I0408 19:16:07.344995  978532 main.go:141] libmachine: Using SSH client type: native
	I0408 19:16:07.345268  978532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33765 <nil> <nil>}
	I0408 19:16:07.345275  978532 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 19:16:07.345939  978532 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0408 19:16:10.485438  978532 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-693017
	
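
provisionDockerMachine talks to the node over the forwarded SSH port (127.0.0.1:33765 here), and the first dial can fail with "handshake failed: EOF" while sshd is still coming up, as logged above, before the retry succeeds. A minimal equivalent of the hostname probe using golang.org/x/crypto/ssh; the user, key path, and port are taken from the log, the rest is a sketch:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path from the log.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/18585-838483/.minikube/machines/scheduled-stop-693017/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; not for production
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33765", cfg) // forwarded 22/tcp
		if err != nil {
			log.Fatal(err) // the real flow retries: first attempt may hit "handshake failed: EOF"
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out) // scheduled-stop-693017
	}
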
	I0408 19:16:10.485452  978532 ubuntu.go:169] provisioning hostname "scheduled-stop-693017"
	I0408 19:16:10.485527  978532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-693017
	I0408 19:16:10.501252  978532 main.go:141] libmachine: Using SSH client type: native
	I0408 19:16:10.501490  978532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33765 <nil> <nil>}
	I0408 19:16:10.501499  978532 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-693017 && echo "scheduled-stop-693017" | sudo tee /etc/hostname
	I0408 19:16:10.649830  978532 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-693017
	
	I0408 19:16:10.649900  978532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-693017
	I0408 19:16:10.665521  978532 main.go:141] libmachine: Using SSH client type: native
	I0408 19:16:10.665774  978532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33765 <nil> <nil>}
	I0408 19:16:10.665789  978532 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-693017' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-693017/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-693017' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 19:16:10.802163  978532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 19:16:10.802179  978532 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18585-838483/.minikube CaCertPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18585-838483/.minikube}
	I0408 19:16:10.802202  978532 ubuntu.go:177] setting up certificates
	I0408 19:16:10.802212  978532 provision.go:84] configureAuth start
	I0408 19:16:10.802275  978532 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-693017
	I0408 19:16:10.817327  978532 provision.go:143] copyHostCerts
	I0408 19:16:10.817383  978532 exec_runner.go:144] found /home/jenkins/minikube-integration/18585-838483/.minikube/ca.pem, removing ...
	I0408 19:16:10.817391  978532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18585-838483/.minikube/ca.pem
	I0408 19:16:10.817467  978532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18585-838483/.minikube/ca.pem (1082 bytes)
	I0408 19:16:10.817612  978532 exec_runner.go:144] found /home/jenkins/minikube-integration/18585-838483/.minikube/cert.pem, removing ...
	I0408 19:16:10.817617  978532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18585-838483/.minikube/cert.pem
	I0408 19:16:10.817649  978532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18585-838483/.minikube/cert.pem (1123 bytes)
	I0408 19:16:10.817708  978532 exec_runner.go:144] found /home/jenkins/minikube-integration/18585-838483/.minikube/key.pem, removing ...
	I0408 19:16:10.817711  978532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18585-838483/.minikube/key.pem
	I0408 19:16:10.817739  978532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18585-838483/.minikube/key.pem (1675 bytes)
	I0408 19:16:10.817786  978532 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18585-838483/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-693017 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-693017]
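
The server certificate above is signed by the local minikube CA with SANs covering the loopback address, the node IP, and the host names listed in san=[...]. A compressed crypto/x509 sketch of issuing such a cert; to stay self-contained it generates a throwaway in-process CA instead of loading ca.pem/ca-key.pem, so it is illustrative rather than minikube's actual code path:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA standing in for minikube's ca.pem/ca-key.pem.
		caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SANs listed in the log line above.
		srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.scheduled-stop-693017"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:     []string{"localhost", "minikube", "scheduled-stop-693017"},
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Println(len(der), err)
	}
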
	I0408 19:16:11.070844  978532 provision.go:177] copyRemoteCerts
	I0408 19:16:11.070918  978532 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 19:16:11.070962  978532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-693017
	I0408 19:16:11.087596  978532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33765 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/scheduled-stop-693017/id_rsa Username:docker}
	I0408 19:16:11.186733  978532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 19:16:11.210219  978532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0408 19:16:11.233283  978532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 19:16:11.256813  978532 provision.go:87] duration metric: took 454.588673ms to configureAuth
	I0408 19:16:11.256832  978532 ubuntu.go:193] setting minikube options for container-runtime
	I0408 19:16:11.257017  978532 config.go:182] Loaded profile config "scheduled-stop-693017": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 19:16:11.257022  978532 machine.go:97] duration metric: took 3.938672055s to provisionDockerMachine
	I0408 19:16:11.257028  978532 client.go:171] duration metric: took 9.696463422s to LocalClient.Create
	I0408 19:16:11.257039  978532 start.go:167] duration metric: took 9.696516901s to libmachine.API.Create "scheduled-stop-693017"
	I0408 19:16:11.257045  978532 start.go:293] postStartSetup for "scheduled-stop-693017" (driver="docker")
	I0408 19:16:11.257054  978532 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 19:16:11.257110  978532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 19:16:11.257145  978532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-693017
	I0408 19:16:11.272407  978532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33765 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/scheduled-stop-693017/id_rsa Username:docker}
	I0408 19:16:11.372532  978532 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 19:16:11.376776  978532 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0408 19:16:11.376801  978532 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0408 19:16:11.376810  978532 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0408 19:16:11.376816  978532 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0408 19:16:11.376826  978532 filesync.go:126] Scanning /home/jenkins/minikube-integration/18585-838483/.minikube/addons for local assets ...
	I0408 19:16:11.376885  978532 filesync.go:126] Scanning /home/jenkins/minikube-integration/18585-838483/.minikube/files for local assets ...
	I0408 19:16:11.376974  978532 filesync.go:149] local asset: /home/jenkins/minikube-integration/18585-838483/.minikube/files/etc/ssl/certs/8439002.pem -> 8439002.pem in /etc/ssl/certs
	I0408 19:16:11.377077  978532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 19:16:11.385936  978532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/files/etc/ssl/certs/8439002.pem --> /etc/ssl/certs/8439002.pem (1708 bytes)
	I0408 19:16:11.409770  978532 start.go:296] duration metric: took 152.711479ms for postStartSetup
	I0408 19:16:11.410196  978532 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-693017
	I0408 19:16:11.424297  978532 profile.go:143] Saving config to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/config.json ...
	I0408 19:16:11.424581  978532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 19:16:11.424627  978532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-693017
	I0408 19:16:11.440762  978532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33765 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/scheduled-stop-693017/id_rsa Username:docker}
	I0408 19:16:11.536082  978532 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0408 19:16:11.540772  978532 start.go:128] duration metric: took 9.983279998s to createHost
	I0408 19:16:11.540791  978532 start.go:83] releasing machines lock for "scheduled-stop-693017", held for 9.983408905s
	I0408 19:16:11.540880  978532 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-693017
	I0408 19:16:11.557286  978532 ssh_runner.go:195] Run: cat /version.json
	I0408 19:16:11.557306  978532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 19:16:11.557328  978532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-693017
	I0408 19:16:11.557355  978532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-693017
	I0408 19:16:11.574242  978532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33765 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/scheduled-stop-693017/id_rsa Username:docker}
	I0408 19:16:11.575980  978532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33765 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/scheduled-stop-693017/id_rsa Username:docker}
	I0408 19:16:11.782339  978532 ssh_runner.go:195] Run: systemctl --version
	I0408 19:16:11.788697  978532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 19:16:11.792913  978532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0408 19:16:11.819456  978532 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0408 19:16:11.819522  978532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 19:16:11.848748  978532 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0408 19:16:11.848764  978532 start.go:494] detecting cgroup driver to use...
	I0408 19:16:11.848797  978532 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0408 19:16:11.848865  978532 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 19:16:11.861799  978532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 19:16:11.873594  978532 docker.go:217] disabling cri-docker service (if available) ...
	I0408 19:16:11.873646  978532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 19:16:11.888059  978532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 19:16:11.902216  978532 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 19:16:11.981610  978532 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 19:16:12.095298  978532 docker.go:233] disabling docker service ...
	I0408 19:16:12.095366  978532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 19:16:12.119136  978532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 19:16:12.131412  978532 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 19:16:12.223045  978532 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 19:16:12.314950  978532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 19:16:12.326587  978532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 19:16:12.342445  978532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0408 19:16:12.352213  978532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 19:16:12.362495  978532 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 19:16:12.362554  978532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 19:16:12.372176  978532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 19:16:12.381887  978532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 19:16:12.391432  978532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 19:16:12.400885  978532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 19:16:12.410203  978532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 19:16:12.419860  978532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 19:16:12.430088  978532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
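
The sed runs above patch /etc/containerd/config.toml in place: sandbox_image, SystemdCgroup=false for the cgroupfs driver, the runc v2 runtime, the CNI conf_dir, and enable_unprivileged_ports. Two of those edits expressed in Go with regexp over an assumed minimal config fragment, for readers who want the substitution semantics spelled out:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Hypothetical minimal fragment of /etc/containerd/config.toml.
		conf := `[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.8"
	[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true`

		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		conf = re.ReplaceAllString(conf, "${1}SystemdCgroup = false")

		// Equivalent of: sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|'
		reImg := regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`)
		conf = reImg.ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`)

		fmt.Println(conf)
	}
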
	I0408 19:16:12.439790  978532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 19:16:12.448065  978532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 19:16:12.456199  978532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:16:12.534403  978532 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 19:16:12.666859  978532 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0408 19:16:12.666935  978532 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0408 19:16:12.670524  978532 start.go:562] Will wait 60s for crictl version
	I0408 19:16:12.670577  978532 ssh_runner.go:195] Run: which crictl
	I0408 19:16:12.673926  978532 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 19:16:12.709196  978532 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0408 19:16:12.709257  978532 ssh_runner.go:195] Run: containerd --version
	I0408 19:16:12.732101  978532 ssh_runner.go:195] Run: containerd --version
	I0408 19:16:12.755874  978532 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.6.28 ...
	I0408 19:16:12.757640  978532 cli_runner.go:164] Run: docker network inspect scheduled-stop-693017 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0408 19:16:12.772460  978532 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0408 19:16:12.776098  978532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:16:12.786936  978532 kubeadm.go:877] updating cluster {Name:scheduled-stop-693017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:scheduled-stop-693017 Namespace:default APIServerHAVIP: APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 19:16:12.787051  978532 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 19:16:12.787113  978532 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:16:12.823168  978532 containerd.go:627] all images are preloaded for containerd runtime.
	I0408 19:16:12.823180  978532 containerd.go:534] Images already preloaded, skipping extraction
	I0408 19:16:12.823240  978532 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:16:12.859489  978532 containerd.go:627] all images are preloaded for containerd runtime.
	I0408 19:16:12.859500  978532 cache_images.go:84] Images are preloaded, skipping loading
	I0408 19:16:12.859507  978532 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.29.3 containerd true true} ...
	I0408 19:16:12.859597  978532 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=scheduled-stop-693017 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:scheduled-stop-693017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 19:16:12.859657  978532 ssh_runner.go:195] Run: sudo crictl info
	I0408 19:16:12.898662  978532 cni.go:84] Creating CNI manager for ""
	I0408 19:16:12.898675  978532 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0408 19:16:12.898685  978532 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 19:16:12.898704  978532 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-693017 NodeName:scheduled-stop-693017 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 19:16:12.898829  978532 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "scheduled-stop-693017"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
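
The kubeadm config above is rendered from the options struct logged at 19:16:12.898704. A toy text/template rendering with a trimmed, hypothetical parameter struct; minikube's real template carries many more fields and stanzas:

	package main

	import (
		"os"
		"text/template"
	)

	// params is a trimmed, hypothetical stand-in for minikube's kubeadm options struct.
	type params struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		ServiceCIDR      string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		t.Execute(os.Stdout, params{
			AdvertiseAddress: "192.168.76.2",
			BindPort:         8443,
			NodeName:         "scheduled-stop-693017",
			PodSubnet:        "10.244.0.0/16",
			ServiceCIDR:      "10.96.0.0/12",
		})
	}
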
	I0408 19:16:12.898891  978532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 19:16:12.907531  978532 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 19:16:12.907589  978532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 19:16:12.916338  978532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0408 19:16:12.934178  978532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 19:16:12.952030  978532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I0408 19:16:12.969285  978532 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0408 19:16:12.972644  978532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
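
Both /etc/hosts edits (host.minikube.internal at 19:16:12.776 and control-plane.minikube.internal here) follow the same pattern: drop any stale line for the name, append the fresh mapping, copy the result back. The same idempotent rewrite in Go, printing the new file instead of installing it:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost drops any existing line ending in "\t<name>" and appends a
	// fresh mapping, matching the grep -v / echo / cp pipeline in the log.
	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Print(upsertHost(strings.TrimRight(string(data), "\n"), "192.168.76.2", "control-plane.minikube.internal"))
	}
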
	I0408 19:16:12.983108  978532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:16:13.074169  978532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:16:13.094429  978532 certs.go:68] Setting up /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017 for IP: 192.168.76.2
	I0408 19:16:13.094439  978532 certs.go:194] generating shared ca certs ...
	I0408 19:16:13.094453  978532 certs.go:226] acquiring lock for ca certs: {Name:mkee58842a3256e0a530a93e9e38afd9941f0741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:16:13.094597  978532 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18585-838483/.minikube/ca.key
	I0408 19:16:13.094639  978532 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.key
	I0408 19:16:13.094645  978532 certs.go:256] generating profile certs ...
	I0408 19:16:13.094697  978532 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/client.key
	I0408 19:16:13.094707  978532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/client.crt with IP's: []
	I0408 19:16:13.327314  978532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/client.crt ...
	I0408 19:16:13.327329  978532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/client.crt: {Name:mk488899b330060e2d729cff08b157785b9abeff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:16:13.327538  978532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/client.key ...
	I0408 19:16:13.327546  978532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/client.key: {Name:mk803189694563ede688d0eac81476ce18096d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:16:13.327646  978532 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/apiserver.key.939846ee
	I0408 19:16:13.327658  978532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/apiserver.crt.939846ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0408 19:16:14.005880  978532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/apiserver.crt.939846ee ...
	I0408 19:16:14.005898  978532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/apiserver.crt.939846ee: {Name:mkbd49f40a69d50e2eaf861233ad4b139069d5b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:16:14.006125  978532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/apiserver.key.939846ee ...
	I0408 19:16:14.006134  978532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/apiserver.key.939846ee: {Name:mk86d83efcaa03951ab57468c35767b825082883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:16:14.006206  978532 certs.go:381] copying /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/apiserver.crt.939846ee -> /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/apiserver.crt
	I0408 19:16:14.006291  978532 certs.go:385] copying /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/apiserver.key.939846ee -> /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/apiserver.key
	I0408 19:16:14.006351  978532 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/proxy-client.key
	I0408 19:16:14.006368  978532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/proxy-client.crt with IP's: []
	I0408 19:16:14.221431  978532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/proxy-client.crt ...
	I0408 19:16:14.221446  978532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/proxy-client.crt: {Name:mk4da0c051d1b2e111c4ba960c2e890b3fc0ea9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:16:14.221634  978532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/proxy-client.key ...
	I0408 19:16:14.221641  978532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/proxy-client.key: {Name:mkb7babcb47b10108170b82c54cfb1e16021fe28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:16:14.221834  978532 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/843900.pem (1338 bytes)
	W0408 19:16:14.221871  978532 certs.go:480] ignoring /home/jenkins/minikube-integration/18585-838483/.minikube/certs/843900_empty.pem, impossibly tiny 0 bytes
	I0408 19:16:14.221878  978532 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 19:16:14.221899  978532 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem (1082 bytes)
	I0408 19:16:14.221928  978532 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/cert.pem (1123 bytes)
	I0408 19:16:14.221948  978532 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/key.pem (1675 bytes)
	I0408 19:16:14.221990  978532 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/files/etc/ssl/certs/8439002.pem (1708 bytes)
	I0408 19:16:14.222633  978532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 19:16:14.246545  978532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 19:16:14.270581  978532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 19:16:14.293689  978532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 19:16:14.316610  978532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 19:16:14.340078  978532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 19:16:14.364530  978532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 19:16:14.387944  978532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/scheduled-stop-693017/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 19:16:14.411720  978532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/files/etc/ssl/certs/8439002.pem --> /usr/share/ca-certificates/8439002.pem (1708 bytes)
	I0408 19:16:14.434901  978532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 19:16:14.458674  978532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/certs/843900.pem --> /usr/share/ca-certificates/843900.pem (1338 bytes)
	I0408 19:16:14.481734  978532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 19:16:14.499086  978532 ssh_runner.go:195] Run: openssl version
	I0408 19:16:14.504413  978532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8439002.pem && ln -fs /usr/share/ca-certificates/8439002.pem /etc/ssl/certs/8439002.pem"
	I0408 19:16:14.513981  978532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8439002.pem
	I0408 19:16:14.517325  978532 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 18:50 /usr/share/ca-certificates/8439002.pem
	I0408 19:16:14.517381  978532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8439002.pem
	I0408 19:16:14.524173  978532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8439002.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 19:16:14.535084  978532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 19:16:14.544121  978532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:16:14.547620  978532 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:16:14.547675  978532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:16:14.554827  978532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 19:16:14.564038  978532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/843900.pem && ln -fs /usr/share/ca-certificates/843900.pem /etc/ssl/certs/843900.pem"
	I0408 19:16:14.572749  978532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/843900.pem
	I0408 19:16:14.576134  978532 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 18:50 /usr/share/ca-certificates/843900.pem
	I0408 19:16:14.576187  978532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/843900.pem
	I0408 19:16:14.583161  978532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/843900.pem /etc/ssl/certs/51391683.0"
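
The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each CA in the trust directory needs a symlink named <subject-hash>.0 (b5213941.0 for minikubeCA.pem above) so that verification can locate it. A small Go sketch that shells out to openssl for the hash and creates the link, mirroring the test -L || ln -fs guard:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		// `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		// Equivalent of: test -L <link> || ln -fs <cert> <link>
		if _, err := os.Lstat(link); os.IsNotExist(err) {
			if err := os.Symlink(cert, link); err != nil {
				fmt.Println(err)
			}
		}
		fmt.Println("hashed symlink:", link)
	}
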
	I0408 19:16:14.592499  978532 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 19:16:14.595642  978532 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 19:16:14.595688  978532 kubeadm.go:391] StartCluster: {Name:scheduled-stop-693017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:scheduled-stop-693017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:16:14.595761  978532 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0408 19:16:14.595828  978532 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:16:14.643329  978532 cri.go:89] found id: ""
	I0408 19:16:14.643404  978532 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 19:16:14.653512  978532 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 19:16:14.663016  978532 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0408 19:16:14.663073  978532 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:16:14.674110  978532 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:16:14.674122  978532 kubeadm.go:156] found existing configuration files:
	
	I0408 19:16:14.674171  978532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:16:14.683982  978532 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:16:14.684035  978532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:16:14.693267  978532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:16:14.706901  978532 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:16:14.706982  978532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:16:14.715429  978532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:16:14.724011  978532 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:16:14.724062  978532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:16:14.732839  978532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:16:14.741844  978532 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:16:14.741896  978532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
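	Each grep/rm pair above implements the same stale-config guard: a kubeconfig is kept only if it already names the expected control-plane endpoint; any non-zero grep status (no match, or file missing as here) triggers removal so kubeadm can regenerate the file. A condensed sketch of that loop (endpoint and paths from the log, the loop itself hypothetical):
	
	  for f in admin kubelet controller-manager scheduler; do
	    # grep exits non-zero on "no match" (1) or "missing file" (2); both mean the config is stale
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	      || sudo rm -f "/etc/kubernetes/${f}.conf"
	  done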
	I0408 19:16:14.750870  978532 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0408 19:16:14.800012  978532 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 19:16:14.800060  978532 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 19:16:14.844511  978532 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0408 19:16:14.844574  978532 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1056-aws
	I0408 19:16:14.844606  978532 kubeadm.go:309] OS: Linux
	I0408 19:16:14.844650  978532 kubeadm.go:309] CGROUPS_CPU: enabled
	I0408 19:16:14.844695  978532 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0408 19:16:14.844740  978532 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0408 19:16:14.844785  978532 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0408 19:16:14.844830  978532 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0408 19:16:14.844876  978532 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0408 19:16:14.844934  978532 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0408 19:16:14.844980  978532 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0408 19:16:14.845023  978532 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0408 19:16:14.916112  978532 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 19:16:14.916210  978532 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 19:16:14.916300  978532 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 19:16:15.162389  978532 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 19:16:15.166444  978532 out.go:204]   - Generating certificates and keys ...
	I0408 19:16:15.166547  978532 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 19:16:15.166610  978532 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 19:16:15.561425  978532 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 19:16:15.905551  978532 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0408 19:16:16.543604  978532 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0408 19:16:17.511978  978532 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0408 19:16:18.007344  978532 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0408 19:16:18.007692  978532 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-693017] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0408 19:16:18.429064  978532 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0408 19:16:18.429387  978532 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-693017] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0408 19:16:18.942098  978532 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 19:16:19.122406  978532 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 19:16:19.953054  978532 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0408 19:16:19.953249  978532 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 19:16:21.172445  978532 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 19:16:21.683566  978532 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 19:16:22.243878  978532 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 19:16:22.825277  978532 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 19:16:23.582605  978532 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 19:16:23.583266  978532 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 19:16:23.588096  978532 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 19:16:23.590597  978532 out.go:204]   - Booting up control plane ...
	I0408 19:16:23.590706  978532 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 19:16:23.590780  978532 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 19:16:23.591648  978532 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 19:16:23.616804  978532 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 19:16:23.617381  978532 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 19:16:23.617435  978532 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 19:16:23.738479  978532 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 19:16:31.234408  978532 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.501996 seconds
	I0408 19:16:31.255137  978532 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 19:16:31.274529  978532 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 19:16:31.799044  978532 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 19:16:31.799242  978532 kubeadm.go:309] [mark-control-plane] Marking the node scheduled-stop-693017 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 19:16:32.310773  978532 kubeadm.go:309] [bootstrap-token] Using token: hoh4bh.zskqg35408baeygq
	I0408 19:16:32.312595  978532 out.go:204]   - Configuring RBAC rules ...
	I0408 19:16:32.312711  978532 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 19:16:32.317578  978532 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 19:16:32.325320  978532 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 19:16:32.330624  978532 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 19:16:32.334423  978532 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 19:16:32.339975  978532 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 19:16:32.353721  978532 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 19:16:32.577535  978532 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 19:16:32.723851  978532 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 19:16:32.725365  978532 kubeadm.go:309] 
	I0408 19:16:32.725430  978532 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 19:16:32.725434  978532 kubeadm.go:309] 
	I0408 19:16:32.725508  978532 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 19:16:32.725511  978532 kubeadm.go:309] 
	I0408 19:16:32.725535  978532 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 19:16:32.725923  978532 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 19:16:32.725987  978532 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 19:16:32.725991  978532 kubeadm.go:309] 
	I0408 19:16:32.726051  978532 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 19:16:32.726073  978532 kubeadm.go:309] 
	I0408 19:16:32.726119  978532 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 19:16:32.726122  978532 kubeadm.go:309] 
	I0408 19:16:32.726173  978532 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 19:16:32.726244  978532 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 19:16:32.726310  978532 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 19:16:32.726314  978532 kubeadm.go:309] 
	I0408 19:16:32.726576  978532 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 19:16:32.726658  978532 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 19:16:32.726661  978532 kubeadm.go:309] 
	I0408 19:16:32.726916  978532 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token hoh4bh.zskqg35408baeygq \
	I0408 19:16:32.727017  978532 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:40732441685d52f358537af2255d867bbdb5cf15cf08de16fca49474be9f966b \
	I0408 19:16:32.727196  978532 kubeadm.go:309] 	--control-plane 
	I0408 19:16:32.727202  978532 kubeadm.go:309] 
	I0408 19:16:32.727430  978532 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 19:16:32.727436  978532 kubeadm.go:309] 
	I0408 19:16:32.727676  978532 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token hoh4bh.zskqg35408baeygq \
	I0408 19:16:32.727990  978532 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:40732441685d52f358537af2255d867bbdb5cf15cf08de16fca49474be9f966b 
	I0408 19:16:32.732114  978532 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1056-aws\n", err: exit status 1
	I0408 19:16:32.732220  978532 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
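	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key, which joining nodes use to pin the control plane. It can be recomputed out of band to validate a join command; a sketch assuming minikube's certificate location and an RSA CA key:
	
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	  # expected to match 40732441685d52f358537af2255d867bbdb5cf15cf08de16fca49474be9f966b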
	I0408 19:16:32.732236  978532 cni.go:84] Creating CNI manager for ""
	I0408 19:16:32.732243  978532 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0408 19:16:32.734684  978532 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0408 19:16:32.736488  978532 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0408 19:16:32.741115  978532 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0408 19:16:32.741131  978532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0408 19:16:32.782917  978532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
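	The CNI step above is ordinary manifest application: the stat on /opt/cni/bin/portmap confirms the standard plugin binaries are present, the kindnet YAML is copied to the node, and the cluster's own kubectl applies it. A hypothetical equivalent run by hand on the node:
	
	  test -x /opt/cni/bin/portmap   # standard CNI plugin binaries installed?
	  sudo /var/lib/minikube/binaries/v1.29.3/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig \
	    apply -f /var/tmp/minikube/cni.yaml   # installs the kindnet manifest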
	I0408 19:16:33.165892  978532 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 19:16:33.166056  978532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:16:33.166064  978532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-693017 minikube.k8s.io/updated_at=2024_04_08T19_16_33_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f9de8f0b190a4305b11b3a925ec3e499cf3fc021 minikube.k8s.io/name=scheduled-stop-693017 minikube.k8s.io/primary=true
	I0408 19:16:33.303813  978532 ops.go:34] apiserver oom_adj: -16
	I0408 19:16:33.303836  978532 kubeadm.go:1107] duration metric: took 137.866987ms to wait for elevateKubeSystemPrivileges
	W0408 19:16:33.303856  978532 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 19:16:33.303861  978532 kubeadm.go:393] duration metric: took 18.708178583s to StartCluster
	I0408 19:16:33.303876  978532 settings.go:142] acquiring lock: {Name:mk5026d653ab6560d4c2e7a68e9bc77339a3813a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:16:33.303932  978532 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18585-838483/kubeconfig
	I0408 19:16:33.304645  978532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/kubeconfig: {Name:mk2667c6d217e28cc639f1cedf47734a14602005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:16:33.304845  978532 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0408 19:16:33.308109  978532 out.go:177] * Verifying Kubernetes components...
	I0408 19:16:33.304953  978532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0408 19:16:33.305118  978532 config.go:182] Loaded profile config "scheduled-stop-693017": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 19:16:33.305134  978532 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 19:16:33.309833  978532 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-693017"
	I0408 19:16:33.309863  978532 addons.go:234] Setting addon storage-provisioner=true in "scheduled-stop-693017"
	I0408 19:16:33.309908  978532 host.go:66] Checking if "scheduled-stop-693017" exists ...
	I0408 19:16:33.310058  978532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:16:33.310148  978532 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-693017"
	I0408 19:16:33.310176  978532 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-693017"
	I0408 19:16:33.310466  978532 cli_runner.go:164] Run: docker container inspect scheduled-stop-693017 --format={{.State.Status}}
	I0408 19:16:33.310495  978532 cli_runner.go:164] Run: docker container inspect scheduled-stop-693017 --format={{.State.Status}}
	I0408 19:16:33.338328  978532 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:16:33.340253  978532 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 19:16:33.340265  978532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 19:16:33.340339  978532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-693017
	I0408 19:16:33.348084  978532 addons.go:234] Setting addon default-storageclass=true in "scheduled-stop-693017"
	I0408 19:16:33.348111  978532 host.go:66] Checking if "scheduled-stop-693017" exists ...
	I0408 19:16:33.348526  978532 cli_runner.go:164] Run: docker container inspect scheduled-stop-693017 --format={{.State.Status}}
	I0408 19:16:33.378084  978532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33765 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/scheduled-stop-693017/id_rsa Username:docker}
	I0408 19:16:33.378806  978532 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 19:16:33.378815  978532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 19:16:33.378878  978532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-693017
	I0408 19:16:33.400124  978532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33765 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/scheduled-stop-693017/id_rsa Username:docker}
	I0408 19:16:33.556128  978532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:16:33.556372  978532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0408 19:16:33.589339  978532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 19:16:33.640824  978532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 19:16:34.004026  978532 start.go:946] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
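	The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts block ahead of the forward directive so host.minikube.internal resolves to the host gateway, and adds a log directive before errors. The resulting Corefile fragment should look roughly like this (a config snippet reconstructed from the sed expressions, not captured from the cluster):
	
	  .:53 {
	      log
	      errors
	      hosts {
	         192.168.76.1 host.minikube.internal
	         fallthrough
	      }
	      forward . /etc/resolv.conf
	      ...
	  }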
	I0408 19:16:34.006252  978532 api_server.go:52] waiting for apiserver process to appear ...
	I0408 19:16:34.006325  978532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:16:34.291085  978532 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0408 19:16:34.288957  978532 api_server.go:72] duration metric: took 984.08375ms to wait for apiserver process to appear ...
	I0408 19:16:34.293028  978532 api_server.go:88] waiting for apiserver healthz status ...
	I0408 19:16:34.293055  978532 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0408 19:16:34.293371  978532 addons.go:505] duration metric: took 988.240558ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0408 19:16:34.301841  978532 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0408 19:16:34.303291  978532 api_server.go:141] control plane version: v1.29.3
	I0408 19:16:34.303307  978532 api_server.go:131] duration metric: took 10.270161ms to wait for apiserver health ...
	I0408 19:16:34.303314  978532 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 19:16:34.309526  978532 system_pods.go:59] 5 kube-system pods found
	I0408 19:16:34.309544  978532 system_pods.go:61] "etcd-scheduled-stop-693017" [b1f26c91-3bdd-428d-b0d2-ef7688cd3247] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 19:16:34.309551  978532 system_pods.go:61] "kube-apiserver-scheduled-stop-693017" [a23eec4b-2ca1-4fcf-ba57-b6bdfb9fe192] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 19:16:34.309558  978532 system_pods.go:61] "kube-controller-manager-scheduled-stop-693017" [7f56237f-5c53-4eb6-84b8-493d8d2a75e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 19:16:34.309564  978532 system_pods.go:61] "kube-scheduler-scheduled-stop-693017" [b0bde29f-dabf-485e-b047-c85fb83f6359] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 19:16:34.309569  978532 system_pods.go:61] "storage-provisioner" [15a6adf0-4688-43e1-8c1d-70f48981d531] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0408 19:16:34.309575  978532 system_pods.go:74] duration metric: took 6.255586ms to wait for pod list to return data ...
	I0408 19:16:34.309584  978532 kubeadm.go:576] duration metric: took 1.004715951s to wait for: map[apiserver:true system_pods:true]
	I0408 19:16:34.309595  978532 node_conditions.go:102] verifying NodePressure condition ...
	I0408 19:16:34.312644  978532 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0408 19:16:34.312661  978532 node_conditions.go:123] node cpu capacity is 2
	I0408 19:16:34.312670  978532 node_conditions.go:105] duration metric: took 3.071216ms to run NodePressure ...
	I0408 19:16:34.312681  978532 start.go:240] waiting for startup goroutines ...
	I0408 19:16:34.508527  978532 kapi.go:248] "coredns" deployment in "kube-system" namespace and "scheduled-stop-693017" context rescaled to 1 replicas
	I0408 19:16:34.508548  978532 start.go:245] waiting for cluster config update ...
	I0408 19:16:34.508558  978532 start.go:254] writing updated cluster config ...
	I0408 19:16:34.508825  978532 ssh_runner.go:195] Run: rm -f paused
	I0408 19:16:34.571096  978532 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 19:16:34.573670  978532 out.go:177] * Done! kubectl is now configured to use "scheduled-stop-693017" cluster and "default" namespace by default
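	With the kubeconfig written, the new cluster can be checked directly from the host; a quick sanity pass (context name taken from the log):
	
	  kubectl --context scheduled-stop-693017 get nodes -o wide
	  kubectl --context scheduled-stop-693017 -n kube-system get pods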
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3947477b12723       121d70d9a3805       10 seconds ago      Running             kube-controller-manager   0                   f42dcff1ecbf5       kube-controller-manager-scheduled-stop-693017
	2e39dd416a610       2581114f5709d       10 seconds ago      Running             kube-apiserver            0                   beeb79cd25522       kube-apiserver-scheduled-stop-693017
	976753fc8ac38       4b51f9f6bc9b9       10 seconds ago      Running             kube-scheduler            0                   21d8e6355446c       kube-scheduler-scheduled-stop-693017
	50d2e6992264a       014faa467e297       10 seconds ago      Running             etcd                      0                   32049cf88aba8       etcd-scheduled-stop-693017
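	This table is the CRI runtime's view of the node; the same data can be queried directly with crictl against containerd's default socket, e.g.:
	
	  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a   # all containers, any state
	  sudo crictl logs 3947477b12723   # logs for the kube-controller-manager container above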
	
	
	==> containerd <==
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.124263804Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21d8e6355446c804fba86981415424594a87035ad5390a88932d0dceaacd22ee pid=1255 runtime=io.containerd.runc.v2
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.130908762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.131079531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.131121261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.131332858Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f42dcff1ecbf59c6f7502b7f3c741c8782a1abe0bb6b1357acd56f5d433b159f pid=1247 runtime=io.containerd.runc.v2
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.239883115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-scheduled-stop-693017,Uid:3c3e294fa3e012a4b1b7ce1409c1b006,Namespace:kube-system,Attempt:0,} returns sandbox id \"32049cf88aba8e8afbea375c6d102aac18dc7619135eb3ef1f43a7c50c41932f\""
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.254302068Z" level=info msg="CreateContainer within sandbox \"32049cf88aba8e8afbea375c6d102aac18dc7619135eb3ef1f43a7c50c41932f\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.256337922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-scheduled-stop-693017,Uid:1a01e4bc0697f1a8fb0f3595d17eb166,Namespace:kube-system,Attempt:0,} returns sandbox id \"21d8e6355446c804fba86981415424594a87035ad5390a88932d0dceaacd22ee\""
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.257619727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-scheduled-stop-693017,Uid:a96982c8181cdc35014c99214799937e,Namespace:kube-system,Attempt:0,} returns sandbox id \"beeb79cd25522a542918428d76a93d33faad9ff303bcd3e78de70018931c4a28\""
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.265309236Z" level=info msg="CreateContainer within sandbox \"21d8e6355446c804fba86981415424594a87035ad5390a88932d0dceaacd22ee\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.268328260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-scheduled-stop-693017,Uid:b13e580ac2076340eaebf34020c64f0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f42dcff1ecbf59c6f7502b7f3c741c8782a1abe0bb6b1357acd56f5d433b159f\""
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.270969825Z" level=info msg="CreateContainer within sandbox \"beeb79cd25522a542918428d76a93d33faad9ff303bcd3e78de70018931c4a28\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.274670998Z" level=info msg="CreateContainer within sandbox \"f42dcff1ecbf59c6f7502b7f3c741c8782a1abe0bb6b1357acd56f5d433b159f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.280988375Z" level=info msg="CreateContainer within sandbox \"32049cf88aba8e8afbea375c6d102aac18dc7619135eb3ef1f43a7c50c41932f\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"50d2e6992264a5318f965792d66a41b6bd63ce515579572f2fb495ae85e08315\""
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.281751867Z" level=info msg="StartContainer for \"50d2e6992264a5318f965792d66a41b6bd63ce515579572f2fb495ae85e08315\""
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.291277506Z" level=info msg="CreateContainer within sandbox \"21d8e6355446c804fba86981415424594a87035ad5390a88932d0dceaacd22ee\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"976753fc8ac38008476932c4de58c98e60a674ad62714535ce2b7402fe62dc1d\""
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.292089054Z" level=info msg="StartContainer for \"976753fc8ac38008476932c4de58c98e60a674ad62714535ce2b7402fe62dc1d\""
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.304560586Z" level=info msg="CreateContainer within sandbox \"beeb79cd25522a542918428d76a93d33faad9ff303bcd3e78de70018931c4a28\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2e39dd416a610808e4fdb99cc942c35a424cdbd560186fe1426c3f4dbf6bd53d\""
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.305254345Z" level=info msg="StartContainer for \"2e39dd416a610808e4fdb99cc942c35a424cdbd560186fe1426c3f4dbf6bd53d\""
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.328080270Z" level=info msg="CreateContainer within sandbox \"f42dcff1ecbf59c6f7502b7f3c741c8782a1abe0bb6b1357acd56f5d433b159f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3947477b1272337eefc29c092b250a16165085baee989aebba2e12f21746fa38\""
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.331257428Z" level=info msg="StartContainer for \"3947477b1272337eefc29c092b250a16165085baee989aebba2e12f21746fa38\""
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.345489046Z" level=info msg="StartContainer for \"50d2e6992264a5318f965792d66a41b6bd63ce515579572f2fb495ae85e08315\" returns successfully"
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.470038688Z" level=info msg="StartContainer for \"976753fc8ac38008476932c4de58c98e60a674ad62714535ce2b7402fe62dc1d\" returns successfully"
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.480149453Z" level=info msg="StartContainer for \"2e39dd416a610808e4fdb99cc942c35a424cdbd560186fe1426c3f4dbf6bd53d\" returns successfully"
	Apr 08 19:16:25 scheduled-stop-693017 containerd[773]: time="2024-04-08T19:16:25.487841908Z" level=info msg="StartContainer for \"3947477b1272337eefc29c092b250a16165085baee989aebba2e12f21746fa38\" returns successfully"
	
	
	==> describe nodes <==
	Name:               scheduled-stop-693017
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-693017
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9de8f0b190a4305b11b3a925ec3e499cf3fc021
	                    minikube.k8s.io/name=scheduled-stop-693017
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T19_16_33_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 19:16:29 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-693017
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 19:16:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 19:16:32 +0000   Mon, 08 Apr 2024 19:16:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 19:16:32 +0000   Mon, 08 Apr 2024 19:16:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 19:16:32 +0000   Mon, 08 Apr 2024 19:16:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 08 Apr 2024 19:16:32 +0000   Mon, 08 Apr 2024 19:16:32 +0000   KubeletNotReady              container runtime status check may not have completed yet
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-693017
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 3169fbc43ee54aa28f80e93d6693e327
	  System UUID:                7d52eb3e-3e48-48a7-a852-03baa0939328
	  Boot ID:                    b4b2abab-4517-475f-9e8e-63d816803507
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-693017                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6s
	  kube-system                 kube-apiserver-scheduled-stop-693017             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-scheduled-stop-693017    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-scheduled-stop-693017             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 12s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x8 over 12s)  kubelet  Node scheduled-stop-693017 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet  Node scheduled-stop-693017 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x7 over 12s)  kubelet  Node scheduled-stop-693017 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 4s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  4s                 kubelet  Node scheduled-stop-693017 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s                 kubelet  Node scheduled-stop-693017 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s                 kubelet  Node scheduled-stop-693017 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4s                 kubelet  Node scheduled-stop-693017 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  4s                 kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.001042] FS-Cache: O-key=[8] '36d4c90000000000'
	[  +0.000719] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000931] FS-Cache: N-cookie d=000000002ce2b4d1{9p.inode} n=000000004e1ae318
	[  +0.001021] FS-Cache: N-key=[8] '36d4c90000000000'
	[  +0.003218] FS-Cache: Duplicate cookie detected
	[  +0.000732] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000935] FS-Cache: O-cookie d=000000002ce2b4d1{9p.inode} n=000000006d0c49e0
	[  +0.001134] FS-Cache: O-key=[8] '36d4c90000000000'
	[  +0.000710] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000934] FS-Cache: N-cookie d=000000002ce2b4d1{9p.inode} n=00000000c36a3df6
	[  +0.001046] FS-Cache: N-key=[8] '36d4c90000000000'
	[  +2.726305] FS-Cache: Duplicate cookie detected
	[  +0.000790] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001064] FS-Cache: O-cookie d=000000002ce2b4d1{9p.inode} n=0000000060bf15ac
	[  +0.001248] FS-Cache: O-key=[8] '35d4c90000000000'
	[  +0.000713] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000938] FS-Cache: N-cookie d=000000002ce2b4d1{9p.inode} n=0000000099083bd9
	[  +0.001194] FS-Cache: N-key=[8] '35d4c90000000000'
	[  +0.329667] FS-Cache: Duplicate cookie detected
	[  +0.000738] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000974] FS-Cache: O-cookie d=000000002ce2b4d1{9p.inode} n=0000000083bcb924
	[  +0.001047] FS-Cache: O-key=[8] '3bd4c90000000000'
	[  +0.000776] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000942] FS-Cache: N-cookie d=000000002ce2b4d1{9p.inode} n=00000000cd19031b
	[  +0.001036] FS-Cache: N-key=[8] '3bd4c90000000000'
	
	
	==> etcd [50d2e6992264a5318f965792d66a41b6bd63ce515579572f2fb495ae85e08315] <==
	{"level":"info","ts":"2024-04-08T19:16:25.389614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2024-04-08T19:16:25.389754Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2024-04-08T19:16:25.391067Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-08T19:16:25.391261Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-08T19:16:25.391283Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-08T19:16:25.392052Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-04-08T19:16:25.392067Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-04-08T19:16:26.170125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-08T19:16:26.170355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-08T19:16:26.170473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2024-04-08T19:16:26.170607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2024-04-08T19:16:26.170766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-04-08T19:16:26.170867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2024-04-08T19:16:26.171024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-04-08T19:16:26.174258Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:scheduled-stop-693017 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-08T19:16:26.174581Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T19:16:26.175096Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T19:16:26.179671Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T19:16:26.179945Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-08T19:16:26.179994Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-08T19:16:26.180424Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-04-08T19:16:26.187376Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-08T19:16:26.187423Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T19:16:26.218952Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T19:16:26.219237Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:16:36 up  3:59,  0 users,  load average: 2.78, 2.11, 2.38
	Linux scheduled-stop-693017 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [2e39dd416a610808e4fdb99cc942c35a424cdbd560186fe1426c3f4dbf6bd53d] <==
	I0408 19:16:29.420802       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0408 19:16:29.420808       1 cache.go:39] Caches are synced for autoregister controller
	I0408 19:16:29.477686       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0408 19:16:29.478188       1 shared_informer.go:318] Caches are synced for configmaps
	I0408 19:16:29.478494       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0408 19:16:29.478597       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0408 19:16:29.478986       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0408 19:16:29.497656       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0408 19:16:29.500375       1 controller.go:624] quota admission added evaluator for: namespaces
	I0408 19:16:29.515897       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0408 19:16:29.560441       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0408 19:16:29.764374       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0408 19:16:30.320464       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0408 19:16:30.325459       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0408 19:16:30.325480       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0408 19:16:30.862187       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0408 19:16:30.912135       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0408 19:16:30.985631       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0408 19:16:30.992452       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0408 19:16:30.993552       1 controller.go:624] quota admission added evaluator for: endpoints
	I0408 19:16:30.999896       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0408 19:16:31.458583       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0408 19:16:32.564850       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0408 19:16:32.576221       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0408 19:16:32.594451       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [3947477b1272337eefc29c092b250a16165085baee989aebba2e12f21746fa38] <==
	I0408 19:16:32.448445       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0408 19:16:32.448704       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0408 19:16:32.448837       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0408 19:16:32.448899       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0408 19:16:32.599980       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0408 19:16:32.600062       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0408 19:16:32.600070       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0408 19:16:32.747369       1 controllermanager.go:735] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0408 19:16:32.747423       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0408 19:16:32.747432       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0408 19:16:32.897369       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0408 19:16:32.897413       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0408 19:16:32.897452       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0408 19:16:32.897467       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0408 19:16:33.046353       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0408 19:16:33.046415       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0408 19:16:33.046422       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0408 19:16:33.350564       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0408 19:16:33.350661       1 horizontal.go:200] "Starting HPA controller"
	I0408 19:16:33.350669       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0408 19:16:33.506143       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0408 19:16:33.506227       1 stateful_set.go:161] "Starting stateful set controller"
	I0408 19:16:33.506235       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0408 19:16:33.649985       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0408 19:16:33.650104       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	
	
	==> kube-scheduler [976753fc8ac38008476932c4de58c98e60a674ad62714535ce2b7402fe62dc1d] <==
	W0408 19:16:29.504246       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 19:16:29.504317       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0408 19:16:29.508504       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 19:16:29.508542       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 19:16:30.338919       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 19:16:30.338958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0408 19:16:30.423717       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 19:16:30.423942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 19:16:30.438747       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 19:16:30.438785       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0408 19:16:30.453694       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 19:16:30.453733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0408 19:16:30.484951       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0408 19:16:30.485158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0408 19:16:30.545622       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 19:16:30.545660       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0408 19:16:30.600505       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0408 19:16:30.600558       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0408 19:16:30.606102       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0408 19:16:30.606157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0408 19:16:30.668853       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 19:16:30.668895       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0408 19:16:30.821345       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 19:16:30.821394       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0408 19:16:32.788860       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
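	
	The "forbidden" list/watch failures above are the usual control-plane startup race: the scheduler's reflectors begin listing resources before the apiserver has finished installing the default RBAC bindings for system:kube-scheduler, and client-go retries until they land; the final "Caches are synced" line marks the recovery. A hedged way to spot-check that the bindings took effect (assumes an admin kubeconfig for this cluster):
	
		# kubectl auth can-i answers yes/no for the impersonated scheduler identity
		kubectl --context scheduled-stop-693017 auth can-i list pods --as=system:kube-scheduler
		kubectl --context scheduled-stop-693017 auth can-i list nodes --as=system:kube-scheduler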
	
	
	==> kubelet <==
	Apr 08 19:16:32 scheduled-stop-693017 kubelet[1545]: I0408 19:16:32.978916    1545 topology_manager.go:215] "Topology Admit Handler" podUID="a96982c8181cdc35014c99214799937e" podNamespace="kube-system" podName="kube-apiserver-scheduled-stop-693017"
	Apr 08 19:16:32 scheduled-stop-693017 kubelet[1545]: I0408 19:16:32.978983    1545 topology_manager.go:215] "Topology Admit Handler" podUID="b13e580ac2076340eaebf34020c64f0a" podNamespace="kube-system" podName="kube-controller-manager-scheduled-stop-693017"
	Apr 08 19:16:32 scheduled-stop-693017 kubelet[1545]: I0408 19:16:32.979051    1545 topology_manager.go:215] "Topology Admit Handler" podUID="1a01e4bc0697f1a8fb0f3595d17eb166" podNamespace="kube-system" podName="kube-scheduler-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: E0408 19:16:33.007483    1545 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"etcd-scheduled-stop-693017\" already exists" pod="kube-system/etcd-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.074418    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b13e580ac2076340eaebf34020c64f0a-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-693017\" (UID: \"b13e580ac2076340eaebf34020c64f0a\") " pod="kube-system/kube-controller-manager-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.074484    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/3c3e294fa3e012a4b1b7ce1409c1b006-etcd-certs\") pod \"etcd-scheduled-stop-693017\" (UID: \"3c3e294fa3e012a4b1b7ce1409c1b006\") " pod="kube-system/etcd-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.074518    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b13e580ac2076340eaebf34020c64f0a-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-693017\" (UID: \"b13e580ac2076340eaebf34020c64f0a\") " pod="kube-system/kube-controller-manager-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.074553    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b13e580ac2076340eaebf34020c64f0a-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-693017\" (UID: \"b13e580ac2076340eaebf34020c64f0a\") " pod="kube-system/kube-controller-manager-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.074578    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b13e580ac2076340eaebf34020c64f0a-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-693017\" (UID: \"b13e580ac2076340eaebf34020c64f0a\") " pod="kube-system/kube-controller-manager-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.074601    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/3c3e294fa3e012a4b1b7ce1409c1b006-etcd-data\") pod \"etcd-scheduled-stop-693017\" (UID: \"3c3e294fa3e012a4b1b7ce1409c1b006\") " pod="kube-system/etcd-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.074636    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a96982c8181cdc35014c99214799937e-ca-certs\") pod \"kube-apiserver-scheduled-stop-693017\" (UID: \"a96982c8181cdc35014c99214799937e\") " pod="kube-system/kube-apiserver-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.074663    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a96982c8181cdc35014c99214799937e-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-693017\" (UID: \"a96982c8181cdc35014c99214799937e\") " pod="kube-system/kube-apiserver-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.074699    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b13e580ac2076340eaebf34020c64f0a-ca-certs\") pod \"kube-controller-manager-scheduled-stop-693017\" (UID: \"b13e580ac2076340eaebf34020c64f0a\") " pod="kube-system/kube-controller-manager-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.074724    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b13e580ac2076340eaebf34020c64f0a-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-693017\" (UID: \"b13e580ac2076340eaebf34020c64f0a\") " pod="kube-system/kube-controller-manager-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.074750    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b13e580ac2076340eaebf34020c64f0a-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-693017\" (UID: \"b13e580ac2076340eaebf34020c64f0a\") " pod="kube-system/kube-controller-manager-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.074785    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a01e4bc0697f1a8fb0f3595d17eb166-kubeconfig\") pod \"kube-scheduler-scheduled-stop-693017\" (UID: \"1a01e4bc0697f1a8fb0f3595d17eb166\") " pod="kube-system/kube-scheduler-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.074821    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a96982c8181cdc35014c99214799937e-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-693017\" (UID: \"a96982c8181cdc35014c99214799937e\") " pod="kube-system/kube-apiserver-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.074854    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a96982c8181cdc35014c99214799937e-k8s-certs\") pod \"kube-apiserver-scheduled-stop-693017\" (UID: \"a96982c8181cdc35014c99214799937e\") " pod="kube-system/kube-apiserver-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.074885    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a96982c8181cdc35014c99214799937e-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-693017\" (UID: \"a96982c8181cdc35014c99214799937e\") " pod="kube-system/kube-apiserver-scheduled-stop-693017"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.642987    1545 apiserver.go:52] "Watching apiserver"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.672813    1545 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.870943    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-693017" podStartSLOduration=1.870882646 podStartE2EDuration="1.870882646s" podCreationTimestamp="2024-04-08 19:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-08 19:16:33.843924921 +0000 UTC m=+1.317549397" watchObservedRunningTime="2024-04-08 19:16:33.870882646 +0000 UTC m=+1.344507114"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.884932    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-693017" podStartSLOduration=3.884885626 podStartE2EDuration="3.884885626s" podCreationTimestamp="2024-04-08 19:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-08 19:16:33.871188017 +0000 UTC m=+1.344812517" watchObservedRunningTime="2024-04-08 19:16:33.884885626 +0000 UTC m=+1.358510102"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.885092    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-693017" podStartSLOduration=0.885069441 podStartE2EDuration="885.069441ms" podCreationTimestamp="2024-04-08 19:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-08 19:16:33.882929279 +0000 UTC m=+1.356553771" watchObservedRunningTime="2024-04-08 19:16:33.885069441 +0000 UTC m=+1.358693917"
	Apr 08 19:16:33 scheduled-stop-693017 kubelet[1545]: I0408 19:16:33.908542    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-693017" podStartSLOduration=1.908481756 podStartE2EDuration="1.908481756s" podCreationTimestamp="2024-04-08 19:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-08 19:16:33.893736911 +0000 UTC m=+1.367361387" watchObservedRunningTime="2024-04-08 19:16:33.908481756 +0000 UTC m=+1.382106232"
	

                                                
                                                
-- /stdout --
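In the kubelet log above, firstStartedPulling/lastFinishedPulling values of 0001-01-01 00:00:00 (Go's zero time) indicate that no image pull was recorded for the static control-plane pods: the images were already present from minikube's preloaded tarball. A quick way to confirm on the node (a sketch, assuming the profile is still running):

	# List the images already present in containerd, as the post-mortem itself does
	minikube -p scheduled-stop-693017 ssh -- sudo crictl images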
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-693017 -n scheduled-stop-693017
helpers_test.go:261: (dbg) Run:  kubectl --context scheduled-stop-693017 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context scheduled-stop-693017 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context scheduled-stop-693017 describe pod storage-provisioner: exit status 1 (88.709143ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context scheduled-stop-693017 describe pod storage-provisioner: exit status 1
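The NotFound here is most likely a namespace mismatch rather than a pod that vanished: the non-running-pods query above searched all namespaces (-A), and storage-provisioner runs in kube-system, but the follow-up describe omits a namespace and therefore looks in default. A manual re-check would pass the namespace explicitly (hedged sketch):

	# storage-provisioner normally lives in kube-system;
	# --ignore-not-found suppresses the error if the pod really is gone
	kubectl --context scheduled-stop-693017 -n kube-system get pod storage-provisioner --ignore-not-found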
helpers_test.go:175: Cleaning up "scheduled-stop-693017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-693017
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-693017: (1.913749339s)
--- FAIL: TestScheduledStopUnix (37.80s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (380.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-540675 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-540675 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m16.770169095s)

                                                
                                                
-- stdout --
	* [old-k8s-version-540675] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-540675" primary control-plane node in "old-k8s-version-540675" cluster
	* Pulling base image v0.0.43-1712593525-18585 ...
	* Restarting existing docker container for "old-k8s-version-540675" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-540675 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, dashboard, metrics-server
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 19:29:45.831456 1040091 out.go:291] Setting OutFile to fd 1 ...
	I0408 19:29:45.831593 1040091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:29:45.831602 1040091 out.go:304] Setting ErrFile to fd 2...
	I0408 19:29:45.831608 1040091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:29:45.831859 1040091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
	I0408 19:29:45.832211 1040091 out.go:298] Setting JSON to false
	I0408 19:29:45.833475 1040091 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15130,"bootTime":1712589456,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0408 19:29:45.833554 1040091 start.go:139] virtualization:  
	I0408 19:29:45.836430 1040091 out.go:177] * [old-k8s-version-540675] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0408 19:29:45.838882 1040091 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 19:29:45.838948 1040091 notify.go:220] Checking for updates...
	I0408 19:29:45.842868 1040091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 19:29:45.845235 1040091 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	I0408 19:29:45.847368 1040091 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	I0408 19:29:45.849400 1040091 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0408 19:29:45.851438 1040091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 19:29:45.853710 1040091 config.go:182] Loaded profile config "old-k8s-version-540675": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0408 19:29:45.856032 1040091 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 19:29:45.857972 1040091 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 19:29:45.878458 1040091 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0408 19:29:45.878588 1040091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 19:29:45.964945 1040091 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-08 19:29:45.953717496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 19:29:45.965073 1040091 docker.go:295] overlay module found
	I0408 19:29:45.969003 1040091 out.go:177] * Using the docker driver based on existing profile
	I0408 19:29:45.970905 1040091 start.go:297] selected driver: docker
	I0408 19:29:45.970925 1040091 start.go:901] validating driver "docker" against &{Name:old-k8s-version-540675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-540675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:29:45.971046 1040091 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 19:29:45.971742 1040091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 19:29:46.028754 1040091 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-08 19:29:46.019089313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 19:29:46.029119 1040091 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 19:29:46.029188 1040091 cni.go:84] Creating CNI manager for ""
	I0408 19:29:46.029205 1040091 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0408 19:29:46.029256 1040091 start.go:340] cluster config:
	{Name:old-k8s-version-540675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-540675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:29:46.031573 1040091 out.go:177] * Starting "old-k8s-version-540675" primary control-plane node in "old-k8s-version-540675" cluster
	I0408 19:29:46.033787 1040091 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0408 19:29:46.035659 1040091 out.go:177] * Pulling base image v0.0.43-1712593525-18585 ...
	I0408 19:29:46.037463 1040091 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0408 19:29:46.037520 1040091 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0408 19:29:46.037533 1040091 cache.go:56] Caching tarball of preloaded images
	I0408 19:29:46.037569 1040091 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local docker daemon
	I0408 19:29:46.037636 1040091 preload.go:173] Found /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 19:29:46.037647 1040091 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0408 19:29:46.037769 1040091 profile.go:143] Saving config to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/config.json ...
	I0408 19:29:46.066733 1040091 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local docker daemon, skipping pull
	I0408 19:29:46.066759 1040091 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd exists in daemon, skipping load
	I0408 19:29:46.066781 1040091 cache.go:194] Successfully downloaded all kic artifacts
	I0408 19:29:46.066878 1040091 start.go:360] acquireMachinesLock for old-k8s-version-540675: {Name:mk002d4e573a493432142c82487f5dd9de0a1d03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:29:46.067015 1040091 start.go:364] duration metric: took 91.714µs to acquireMachinesLock for "old-k8s-version-540675"
	I0408 19:29:46.067039 1040091 start.go:96] Skipping create...Using existing machine configuration
	I0408 19:29:46.067159 1040091 fix.go:54] fixHost starting: 
	I0408 19:29:46.067597 1040091 cli_runner.go:164] Run: docker container inspect old-k8s-version-540675 --format={{.State.Status}}
	I0408 19:29:46.083582 1040091 fix.go:112] recreateIfNeeded on old-k8s-version-540675: state=Stopped err=<nil>
	W0408 19:29:46.083652 1040091 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 19:29:46.085942 1040091 out.go:177] * Restarting existing docker container for "old-k8s-version-540675" ...
	I0408 19:29:46.088023 1040091 cli_runner.go:164] Run: docker start old-k8s-version-540675
	I0408 19:29:46.391188 1040091 cli_runner.go:164] Run: docker container inspect old-k8s-version-540675 --format={{.State.Status}}
	I0408 19:29:46.415867 1040091 kic.go:430] container "old-k8s-version-540675" state is running.
	I0408 19:29:46.416622 1040091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-540675
	I0408 19:29:46.438775 1040091 profile.go:143] Saving config to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/config.json ...
	I0408 19:29:46.438997 1040091 machine.go:94] provisionDockerMachine start ...
	I0408 19:29:46.439061 1040091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-540675
	I0408 19:29:46.459252 1040091 main.go:141] libmachine: Using SSH client type: native
	I0408 19:29:46.459620 1040091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33860 <nil> <nil>}
	I0408 19:29:46.459641 1040091 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 19:29:46.460439 1040091 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0408 19:29:49.614121 1040091 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-540675
	
	I0408 19:29:49.614147 1040091 ubuntu.go:169] provisioning hostname "old-k8s-version-540675"
	I0408 19:29:49.614219 1040091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-540675
	I0408 19:29:49.640505 1040091 main.go:141] libmachine: Using SSH client type: native
	I0408 19:29:49.640762 1040091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33860 <nil> <nil>}
	I0408 19:29:49.640781 1040091 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-540675 && echo "old-k8s-version-540675" | sudo tee /etc/hostname
	I0408 19:29:49.821423 1040091 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-540675
	
	I0408 19:29:49.821549 1040091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-540675
	I0408 19:29:49.841931 1040091 main.go:141] libmachine: Using SSH client type: native
	I0408 19:29:49.842297 1040091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33860 <nil> <nil>}
	I0408 19:29:49.842326 1040091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-540675' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-540675/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-540675' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 19:29:49.994282 1040091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 19:29:49.994320 1040091 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18585-838483/.minikube CaCertPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18585-838483/.minikube}
	I0408 19:29:49.994339 1040091 ubuntu.go:177] setting up certificates
	I0408 19:29:49.994355 1040091 provision.go:84] configureAuth start
	I0408 19:29:49.994429 1040091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-540675
	I0408 19:29:50.018348 1040091 provision.go:143] copyHostCerts
	I0408 19:29:50.018416 1040091 exec_runner.go:144] found /home/jenkins/minikube-integration/18585-838483/.minikube/key.pem, removing ...
	I0408 19:29:50.018433 1040091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18585-838483/.minikube/key.pem
	I0408 19:29:50.018524 1040091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18585-838483/.minikube/key.pem (1675 bytes)
	I0408 19:29:50.018647 1040091 exec_runner.go:144] found /home/jenkins/minikube-integration/18585-838483/.minikube/ca.pem, removing ...
	I0408 19:29:50.018659 1040091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18585-838483/.minikube/ca.pem
	I0408 19:29:50.018693 1040091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18585-838483/.minikube/ca.pem (1082 bytes)
	I0408 19:29:50.018819 1040091 exec_runner.go:144] found /home/jenkins/minikube-integration/18585-838483/.minikube/cert.pem, removing ...
	I0408 19:29:50.018832 1040091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18585-838483/.minikube/cert.pem
	I0408 19:29:50.018866 1040091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18585-838483/.minikube/cert.pem (1123 bytes)
	I0408 19:29:50.018948 1040091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18585-838483/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-540675 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-540675]
	I0408 19:29:50.289427 1040091 provision.go:177] copyRemoteCerts
	I0408 19:29:50.289554 1040091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 19:29:50.289651 1040091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-540675
	I0408 19:29:50.315130 1040091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33860 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/old-k8s-version-540675/id_rsa Username:docker}
	I0408 19:29:50.416751 1040091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 19:29:50.457347 1040091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0408 19:29:50.491509 1040091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 19:29:50.533057 1040091 provision.go:87] duration metric: took 538.687463ms to configureAuth
	I0408 19:29:50.533081 1040091 ubuntu.go:193] setting minikube options for container-runtime
	I0408 19:29:50.533274 1040091 config.go:182] Loaded profile config "old-k8s-version-540675": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0408 19:29:50.533282 1040091 machine.go:97] duration metric: took 4.094277721s to provisionDockerMachine
	I0408 19:29:50.533290 1040091 start.go:293] postStartSetup for "old-k8s-version-540675" (driver="docker")
	I0408 19:29:50.533301 1040091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 19:29:50.533348 1040091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 19:29:50.533389 1040091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-540675
	I0408 19:29:50.552105 1040091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33860 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/old-k8s-version-540675/id_rsa Username:docker}
	I0408 19:29:50.659788 1040091 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 19:29:50.663370 1040091 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0408 19:29:50.663404 1040091 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0408 19:29:50.663414 1040091 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0408 19:29:50.663421 1040091 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0408 19:29:50.663431 1040091 filesync.go:126] Scanning /home/jenkins/minikube-integration/18585-838483/.minikube/addons for local assets ...
	I0408 19:29:50.663488 1040091 filesync.go:126] Scanning /home/jenkins/minikube-integration/18585-838483/.minikube/files for local assets ...
	I0408 19:29:50.663585 1040091 filesync.go:149] local asset: /home/jenkins/minikube-integration/18585-838483/.minikube/files/etc/ssl/certs/8439002.pem -> 8439002.pem in /etc/ssl/certs
	I0408 19:29:50.663693 1040091 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 19:29:50.672651 1040091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/files/etc/ssl/certs/8439002.pem --> /etc/ssl/certs/8439002.pem (1708 bytes)
	I0408 19:29:50.697913 1040091 start.go:296] duration metric: took 164.606738ms for postStartSetup
	I0408 19:29:50.698066 1040091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 19:29:50.698110 1040091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-540675
	I0408 19:29:50.713568 1040091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33860 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/old-k8s-version-540675/id_rsa Username:docker}
	I0408 19:29:50.814905 1040091 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0408 19:29:50.819226 1040091 fix.go:56] duration metric: took 4.75206049s for fixHost
	I0408 19:29:50.819252 1040091 start.go:83] releasing machines lock for "old-k8s-version-540675", held for 4.752226066s
	I0408 19:29:50.819321 1040091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-540675
	I0408 19:29:50.836962 1040091 ssh_runner.go:195] Run: cat /version.json
	I0408 19:29:50.837018 1040091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-540675
	I0408 19:29:50.837281 1040091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 19:29:50.837370 1040091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-540675
	I0408 19:29:50.855264 1040091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33860 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/old-k8s-version-540675/id_rsa Username:docker}
	I0408 19:29:50.856995 1040091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33860 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/old-k8s-version-540675/id_rsa Username:docker}
	I0408 19:29:51.065685 1040091 ssh_runner.go:195] Run: systemctl --version
	I0408 19:29:51.070261 1040091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 19:29:51.075096 1040091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0408 19:29:51.094733 1040091 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0408 19:29:51.094819 1040091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 19:29:51.105261 1040091 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
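	
	The two find/sed passes above normalize pod networking before kindnet takes over: the first ensures any loopback CNI config carries a "name" field and a "cniVersion" of 1.0.0 (older generated configs lack both, which newer CNI plugin versions reject), and the second renames stray bridge/podman configs out of the way; here none were found. To inspect the patched file on the node (a hypothetical spot-check; the exact filename varies by base image):
	
		minikube -p old-k8s-version-540675 ssh -- sudo cat /etc/cni/net.d/*loopback.conf*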
	I0408 19:29:51.105289 1040091 start.go:494] detecting cgroup driver to use...
	I0408 19:29:51.105323 1040091 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0408 19:29:51.105385 1040091 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 19:29:51.121370 1040091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 19:29:51.135301 1040091 docker.go:217] disabling cri-docker service (if available) ...
	I0408 19:29:51.135392 1040091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 19:29:51.148398 1040091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 19:29:51.160014 1040091 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 19:29:51.268113 1040091 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 19:29:51.379703 1040091 docker.go:233] disabling docker service ...
	I0408 19:29:51.379833 1040091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 19:29:51.393716 1040091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 19:29:51.407354 1040091 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 19:29:51.505641 1040091 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 19:29:51.591098 1040091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 19:29:51.603265 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 19:29:51.620126 1040091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0408 19:29:51.630436 1040091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 19:29:51.640342 1040091 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 19:29:51.640484 1040091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 19:29:51.650102 1040091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 19:29:51.660052 1040091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 19:29:51.669508 1040091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 19:29:51.679110 1040091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 19:29:51.688706 1040091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 19:29:51.699572 1040091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 19:29:51.710323 1040091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 19:29:51.719469 1040091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:29:51.812863 1040091 ssh_runner.go:195] Run: sudo systemctl restart containerd
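	
	The sed edits above rewrite /etc/containerd/config.toml in place: they pin sandbox_image to registry.k8s.io/pause:3.2 (the pause image paired with Kubernetes v1.20.0), set SystemdCgroup = false to match the cgroupfs driver detected earlier, migrate any io.containerd.runtime.v1/runc.v1 runtime entries to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d; the daemon-reload plus restart then applies them. A hedged one-liner to verify the rewritten settings after the restart:
	
		minikube -p old-k8s-version-540675 ssh -- grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml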
	I0408 19:29:51.977411 1040091 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0408 19:29:51.977491 1040091 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0408 19:29:51.981728 1040091 start.go:562] Will wait 60s for crictl version
	I0408 19:29:51.981802 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:29:51.985627 1040091 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 19:29:52.025330 1040091 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0408 19:29:52.025408 1040091 ssh_runner.go:195] Run: containerd --version
	I0408 19:29:52.046852 1040091 ssh_runner.go:195] Run: containerd --version
	I0408 19:29:52.080623 1040091 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	I0408 19:29:52.082391 1040091 cli_runner.go:164] Run: docker network inspect old-k8s-version-540675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0408 19:29:52.096225 1040091 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0408 19:29:52.099911 1040091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:29:52.111077 1040091 kubeadm.go:877] updating cluster {Name:old-k8s-version-540675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-540675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 19:29:52.111205 1040091 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0408 19:29:52.111265 1040091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:29:52.149865 1040091 containerd.go:627] all images are preloaded for containerd runtime.
	I0408 19:29:52.149890 1040091 containerd.go:534] Images already preloaded, skipping extraction
	I0408 19:29:52.149950 1040091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:29:52.185988 1040091 containerd.go:627] all images are preloaded for containerd runtime.
	I0408 19:29:52.186035 1040091 cache_images.go:84] Images are preloaded, skipping loading
	I0408 19:29:52.186044 1040091 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0408 19:29:52.186169 1040091 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-540675 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-540675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
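	
	The empty ExecStart= line in the unit above is the standard systemd drop-in idiom: an override must first clear the ExecStart list inherited from the base kubelet.service before the next line installs minikube's own command line (remote containerd endpoint, hostname override, node IP). To see the merged result on the node (a sketch; assumes the container is running):
	
		# systemctl cat prints the base unit plus every drop-in in merge order
		minikube -p old-k8s-version-540675 ssh -- systemctl cat kubelet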
	I0408 19:29:52.186239 1040091 ssh_runner.go:195] Run: sudo crictl info
	I0408 19:29:52.225438 1040091 cni.go:84] Creating CNI manager for ""
	I0408 19:29:52.225503 1040091 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0408 19:29:52.225529 1040091 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 19:29:52.225561 1040091 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-540675 NodeName:old-k8s-version-540675 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0408 19:29:52.225758 1040091 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-540675"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
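	
Note: the multi-document kubeadm config above is rendered by minikube from the option struct logged at kubeadm.go:181. As a rough illustration only (not minikube's actual template or types), a minimal Go sketch of rendering such a config from parameters could look like this; the params struct and template are hypothetical subsets of what the log shows:

    package main

    import (
    	"os"
    	"text/template"
    )

    // params is a hypothetical subset of the options seen in the log above.
    type params struct {
    	AdvertiseAddress  string
    	BindPort          int
    	NodeName          string
    	PodSubnet         string
    	ServiceSubnet     string
    	KubernetesVersion string
    }

    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	// Values taken from the log above.
    	p := params{
    		AdvertiseAddress:  "192.168.76.2",
    		BindPort:          8443,
    		NodeName:          "old-k8s-version-540675",
    		PodSubnet:         "10.244.0.0/16",
    		ServiceSubnet:     "10.96.0.0/12",
    		KubernetesVersion: "v1.20.0",
    	}
    	if err := t.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }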
	
	I0408 19:29:52.225865 1040091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0408 19:29:52.234890 1040091 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 19:29:52.235011 1040091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 19:29:52.243719 1040091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0408 19:29:52.261767 1040091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 19:29:52.282470 1040091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0408 19:29:52.300994 1040091 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0408 19:29:52.304711 1040091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
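	
Note: the bash one-liner above is the usual idempotent /etc/hosts update: filter out any stale line for the hostname, append the fresh "ip<TAB>hostname" entry, then copy the result back. A minimal, hypothetical Go equivalent (it prints the rewritten file rather than overwriting /etc/hosts, for safety):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostEntry drops any existing line ending in "\t"+name and
    // appends "ip\tname", mirroring the grep -v / echo pipeline in the log.
    func ensureHostEntry(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // stale entry; replaced below
    		}
    		kept = append(kept, line)
    	}
    	joined := strings.TrimRight(strings.Join(kept, "\n"), "\n")
    	return joined + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(ensureHostEntry(string(data), "192.168.76.2", "control-plane.minikube.internal"))
    }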
	I0408 19:29:52.315632 1040091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:29:52.400767 1040091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:29:52.417425 1040091 certs.go:68] Setting up /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675 for IP: 192.168.76.2
	I0408 19:29:52.417447 1040091 certs.go:194] generating shared ca certs ...
	I0408 19:29:52.417462 1040091 certs.go:226] acquiring lock for ca certs: {Name:mkee58842a3256e0a530a93e9e38afd9941f0741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:29:52.417598 1040091 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18585-838483/.minikube/ca.key
	I0408 19:29:52.417660 1040091 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.key
	I0408 19:29:52.417671 1040091 certs.go:256] generating profile certs ...
	I0408 19:29:52.417758 1040091 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.key
	I0408 19:29:52.417829 1040091 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/apiserver.key.cee5fe35
	I0408 19:29:52.417872 1040091 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/proxy-client.key
	I0408 19:29:52.417995 1040091 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/843900.pem (1338 bytes)
	W0408 19:29:52.418086 1040091 certs.go:480] ignoring /home/jenkins/minikube-integration/18585-838483/.minikube/certs/843900_empty.pem, impossibly tiny 0 bytes
	I0408 19:29:52.418101 1040091 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 19:29:52.418125 1040091 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem (1082 bytes)
	I0408 19:29:52.418151 1040091 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/cert.pem (1123 bytes)
	I0408 19:29:52.418174 1040091 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/key.pem (1675 bytes)
	I0408 19:29:52.418226 1040091 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/files/etc/ssl/certs/8439002.pem (1708 bytes)
	I0408 19:29:52.418850 1040091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 19:29:52.447231 1040091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 19:29:52.471926 1040091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 19:29:52.496706 1040091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 19:29:52.521662 1040091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 19:29:52.549066 1040091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 19:29:52.580215 1040091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 19:29:52.607275 1040091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 19:29:52.632772 1040091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/files/etc/ssl/certs/8439002.pem --> /usr/share/ca-certificates/8439002.pem (1708 bytes)
	I0408 19:29:52.657915 1040091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 19:29:52.682185 1040091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/certs/843900.pem --> /usr/share/ca-certificates/843900.pem (1338 bytes)
	I0408 19:29:52.706537 1040091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 19:29:52.724763 1040091 ssh_runner.go:195] Run: openssl version
	I0408 19:29:52.730187 1040091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8439002.pem && ln -fs /usr/share/ca-certificates/8439002.pem /etc/ssl/certs/8439002.pem"
	I0408 19:29:52.739758 1040091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8439002.pem
	I0408 19:29:52.743561 1040091 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 18:50 /usr/share/ca-certificates/8439002.pem
	I0408 19:29:52.743656 1040091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8439002.pem
	I0408 19:29:52.750948 1040091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8439002.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 19:29:52.760076 1040091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 19:29:52.769353 1040091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:29:52.773021 1040091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:29:52.773088 1040091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:29:52.780256 1040091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 19:29:52.788854 1040091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/843900.pem && ln -fs /usr/share/ca-certificates/843900.pem /etc/ssl/certs/843900.pem"
	I0408 19:29:52.798409 1040091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/843900.pem
	I0408 19:29:52.801748 1040091 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 18:50 /usr/share/ca-certificates/843900.pem
	I0408 19:29:52.801839 1040091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/843900.pem
	I0408 19:29:52.808894 1040091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/843900.pem /etc/ssl/certs/51391683.0"
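	
Note: link names like /etc/ssl/certs/b5213941.0 follow OpenSSL's subject-hash convention: the name is the output of `openssl x509 -hash` plus a ".0" suffix, which is how OpenSSL locates trusted CAs in a hashed directory. A rough Go sketch of the same hash-then-symlink step, shelling out to the openssl invocation seen in the log (assumes the openssl binary is on PATH; not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash creates certsDir/<hash>.0 -> certPath using the same
    // `openssl x509 -hash -noout -in <cert>` call that appears in the log.
    func linkBySubjectHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // ln -fs semantics: replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }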
	I0408 19:29:52.817771 1040091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 19:29:52.821298 1040091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 19:29:52.828115 1040091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 19:29:52.834977 1040091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 19:29:52.841867 1040091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 19:29:52.848906 1040091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 19:29:52.855872 1040091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
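	
Note: each `openssl x509 -noout -checkend 86400` run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would force regeneration. A pure-Go equivalent using crypto/x509 (illustrative only, not minikube's implementation):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, matching `openssl x509 -noout -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if soon {
    		fmt.Println("certificate expires within 24h; regenerate it")
    	}
    }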
	I0408 19:29:52.862786 1040091 kubeadm.go:391] StartCluster: {Name:old-k8s-version-540675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-540675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:29:52.862944 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0408 19:29:52.863029 1040091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:29:52.900316 1040091 cri.go:89] found id: "3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892"
	I0408 19:29:52.900398 1040091 cri.go:89] found id: "0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63"
	I0408 19:29:52.900411 1040091 cri.go:89] found id: "63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070"
	I0408 19:29:52.900416 1040091 cri.go:89] found id: "544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356"
	I0408 19:29:52.900420 1040091 cri.go:89] found id: "b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061"
	I0408 19:29:52.900423 1040091 cri.go:89] found id: "a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f"
	I0408 19:29:52.900427 1040091 cri.go:89] found id: "db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a"
	I0408 19:29:52.900430 1040091 cri.go:89] found id: "172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58"
	I0408 19:29:52.900433 1040091 cri.go:89] found id: ""
	I0408 19:29:52.900507 1040091 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0408 19:29:52.913124 1040091 cri.go:116] JSON = null
	W0408 19:29:52.913175 1040091 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
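	
Note: the warning above comes from cross-checking `crictl ps -a` (8 container IDs) against `sudo runc ... list -f json`, which printed the literal "null", i.e. an empty list, so there was nothing to unpause. A minimal sketch of that consistency check in Go (the runcContainer field names are an assumption for illustration; they never matter here because "null" decodes to an empty slice):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // runcContainer models one entry of `runc list -f json` output.
    // Field names here are assumed, not taken from runc's source.
    type runcContainer struct {
    	ID     string `json:"id"`
    	Status string `json:"status"`
    }

    func main() {
    	raw := []byte("null") // what `runc list -f json` returned in the log
    	var cs []runcContainer
    	if err := json.Unmarshal(raw, &cs); err != nil {
    		panic(err)
    	}
    	psCount := 8 // IDs found by `crictl ps -a` above
    	if len(cs) != psCount {
    		fmt.Printf("unpause skipped: list returned %d containers, but ps returned %d\n",
    			len(cs), psCount)
    	}
    }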
	I0408 19:29:52.913240 1040091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 19:29:52.922695 1040091 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 19:29:52.922757 1040091 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 19:29:52.922771 1040091 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 19:29:52.922846 1040091 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 19:29:52.932920 1040091 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 19:29:52.933527 1040091 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-540675" does not appear in /home/jenkins/minikube-integration/18585-838483/kubeconfig
	I0408 19:29:52.933819 1040091 kubeconfig.go:62] /home/jenkins/minikube-integration/18585-838483/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-540675" cluster setting kubeconfig missing "old-k8s-version-540675" context setting]
	I0408 19:29:52.934369 1040091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/kubeconfig: {Name:mk2667c6d217e28cc639f1cedf47734a14602005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:29:52.935752 1040091 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 19:29:52.944752 1040091 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0408 19:29:52.944785 1040091 kubeadm.go:591] duration metric: took 22.008383ms to restartPrimaryControlPlane
	I0408 19:29:52.944795 1040091 kubeadm.go:393] duration metric: took 82.019319ms to StartCluster
	I0408 19:29:52.944811 1040091 settings.go:142] acquiring lock: {Name:mk5026d653ab6560d4c2e7a68e9bc77339a3813a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:29:52.944881 1040091 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18585-838483/kubeconfig
	I0408 19:29:52.945860 1040091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/kubeconfig: {Name:mk2667c6d217e28cc639f1cedf47734a14602005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:29:52.946115 1040091 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0408 19:29:52.949407 1040091 out.go:177] * Verifying Kubernetes components...
	I0408 19:29:52.946503 1040091 config.go:182] Loaded profile config "old-k8s-version-540675": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0408 19:29:52.946480 1040091 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 19:29:52.951292 1040091 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-540675"
	I0408 19:29:52.951310 1040091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:29:52.951316 1040091 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-540675"
	W0408 19:29:52.951324 1040091 addons.go:243] addon storage-provisioner should already be in state true
	I0408 19:29:52.951353 1040091 host.go:66] Checking if "old-k8s-version-540675" exists ...
	I0408 19:29:52.951403 1040091 addons.go:69] Setting dashboard=true in profile "old-k8s-version-540675"
	I0408 19:29:52.951425 1040091 addons.go:234] Setting addon dashboard=true in "old-k8s-version-540675"
	W0408 19:29:52.951431 1040091 addons.go:243] addon dashboard should already be in state true
	I0408 19:29:52.951450 1040091 host.go:66] Checking if "old-k8s-version-540675" exists ...
	I0408 19:29:52.951845 1040091 cli_runner.go:164] Run: docker container inspect old-k8s-version-540675 --format={{.State.Status}}
	I0408 19:29:52.951851 1040091 cli_runner.go:164] Run: docker container inspect old-k8s-version-540675 --format={{.State.Status}}
	I0408 19:29:52.952251 1040091 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-540675"
	I0408 19:29:52.952323 1040091 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-540675"
	I0408 19:29:52.952323 1040091 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-540675"
	I0408 19:29:52.952838 1040091 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-540675"
	W0408 19:29:52.952920 1040091 addons.go:243] addon metrics-server should already be in state true
	I0408 19:29:52.952966 1040091 host.go:66] Checking if "old-k8s-version-540675" exists ...
	I0408 19:29:52.953467 1040091 cli_runner.go:164] Run: docker container inspect old-k8s-version-540675 --format={{.State.Status}}
	I0408 19:29:52.955770 1040091 cli_runner.go:164] Run: docker container inspect old-k8s-version-540675 --format={{.State.Status}}
	I0408 19:29:52.993399 1040091 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:29:52.995692 1040091 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 19:29:52.995715 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 19:29:52.995782 1040091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-540675
	I0408 19:29:53.002067 1040091 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0408 19:29:53.003872 1040091 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0408 19:29:53.007531 1040091 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0408 19:29:53.007569 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0408 19:29:53.007650 1040091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-540675
	I0408 19:29:53.020635 1040091 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 19:29:53.026122 1040091 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 19:29:53.026147 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 19:29:53.026225 1040091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-540675
	I0408 19:29:53.036448 1040091 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-540675"
	W0408 19:29:53.036479 1040091 addons.go:243] addon default-storageclass should already be in state true
	I0408 19:29:53.036506 1040091 host.go:66] Checking if "old-k8s-version-540675" exists ...
	I0408 19:29:53.036917 1040091 cli_runner.go:164] Run: docker container inspect old-k8s-version-540675 --format={{.State.Status}}
	I0408 19:29:53.066511 1040091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33860 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/old-k8s-version-540675/id_rsa Username:docker}
	I0408 19:29:53.094471 1040091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33860 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/old-k8s-version-540675/id_rsa Username:docker}
	I0408 19:29:53.108535 1040091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33860 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/old-k8s-version-540675/id_rsa Username:docker}
	I0408 19:29:53.109256 1040091 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 19:29:53.109271 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 19:29:53.109321 1040091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-540675
	I0408 19:29:53.134734 1040091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33860 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/old-k8s-version-540675/id_rsa Username:docker}
	I0408 19:29:53.157242 1040091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:29:53.196024 1040091 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-540675" to be "Ready" ...
	I0408 19:29:53.211277 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 19:29:53.229270 1040091 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 19:29:53.229331 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 19:29:53.253453 1040091 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0408 19:29:53.253516 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0408 19:29:53.273062 1040091 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 19:29:53.273125 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 19:29:53.289355 1040091 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0408 19:29:53.289424 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0408 19:29:53.316605 1040091 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 19:29:53.316677 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 19:29:53.330770 1040091 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0408 19:29:53.330834 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0408 19:29:53.338965 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 19:29:53.359112 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0408 19:29:53.370415 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:53.370516 1040091 retry.go:31] will retry after 301.447909ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
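	
Note: every "will retry after ..." line in this stretch comes from a retry-with-backoff loop around `kubectl apply` while the restarted apiserver is still refusing connections on localhost:8443; the applies succeed once it comes up. A minimal, hypothetical Go sketch of that pattern, similar in spirit to what retry.go logs but not its actual implementation:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn up to attempts times, sleeping a jittered, growing
    // delay between failures, and returns the last error on exhaustion.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Jittered backoff: base * 2^i, then shift by up to +/-50%.
    		d := base << uint(i)
    		d += time.Duration(rand.Int63n(int64(d))) - d/2
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retry(5, 300*time.Millisecond, func() error {
    		calls++
    		if calls < 3 { // simulate the apiserver still starting up
    			return errors.New("connection to the server localhost:8443 was refused")
    		}
    		return nil // apiserver is up; apply succeeded
    	})
    	fmt.Println("result:", err)
    }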
	I0408 19:29:53.378871 1040091 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0408 19:29:53.378933 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0408 19:29:53.434130 1040091 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0408 19:29:53.434201 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0408 19:29:53.492961 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:53.493000 1040091 retry.go:31] will retry after 294.152182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0408 19:29:53.493048 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:53.493064 1040091 retry.go:31] will retry after 153.539813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:53.495126 1040091 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0408 19:29:53.495150 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0408 19:29:53.513608 1040091 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0408 19:29:53.513646 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0408 19:29:53.532257 1040091 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0408 19:29:53.532281 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0408 19:29:53.551250 1040091 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0408 19:29:53.551273 1040091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0408 19:29:53.571085 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0408 19:29:53.647181 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0408 19:29:53.658421 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:53.658458 1040091 retry.go:31] will retry after 208.838806ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:53.672749 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0408 19:29:53.722910 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:53.722942 1040091 retry.go:31] will retry after 313.3626ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0408 19:29:53.769393 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:53.769427 1040091 retry.go:31] will retry after 306.841431ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:53.787641 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0408 19:29:53.861498 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:53.861534 1040091 retry.go:31] will retry after 427.105609ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:53.867665 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0408 19:29:53.950059 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:53.950101 1040091 retry.go:31] will retry after 297.931489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:54.037287 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 19:29:54.076822 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0408 19:29:54.146359 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:54.146391 1040091 retry.go:31] will retry after 531.04144ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0408 19:29:54.169598 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:54.169734 1040091 retry.go:31] will retry after 298.358112ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:54.248392 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0408 19:29:54.289779 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0408 19:29:54.335582 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:54.335614 1040091 retry.go:31] will retry after 432.544899ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0408 19:29:54.374229 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:54.374259 1040091 retry.go:31] will retry after 420.802765ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:54.468444 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0408 19:29:54.543108 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:54.543143 1040091 retry.go:31] will retry after 435.592521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:54.678562 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0408 19:29:54.753681 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:54.753714 1040091 retry.go:31] will retry after 482.615643ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:54.768935 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0408 19:29:54.795490 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0408 19:29:54.856096 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:54.856169 1040091 retry.go:31] will retry after 879.907299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0408 19:29:54.889177 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:54.889208 1040091 retry.go:31] will retry after 679.483349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:54.979430 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0408 19:29:55.066420 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:55.066458 1040091 retry.go:31] will retry after 1.281235552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:55.197221 1040091 node_ready.go:53] error getting node "old-k8s-version-540675": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-540675": dial tcp 192.168.76.2:8443: connect: connection refused
	I0408 19:29:55.237436 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0408 19:29:55.309217 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:55.309252 1040091 retry.go:31] will retry after 972.231757ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:55.569645 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0408 19:29:55.646697 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:55.646729 1040091 retry.go:31] will retry after 1.627856552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:55.736450 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0408 19:29:55.814995 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:55.815030 1040091 retry.go:31] will retry after 1.831345085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:56.281990 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 19:29:56.348556 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0408 19:29:56.361438 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:56.361510 1040091 retry.go:31] will retry after 1.821560513s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0408 19:29:56.427196 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:56.427282 1040091 retry.go:31] will retry after 2.409679495s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:57.274836 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0408 19:29:57.348909 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:57.348944 1040091 retry.go:31] will retry after 1.293869328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:57.647343 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0408 19:29:57.697362 1040091 node_ready.go:53] error getting node "old-k8s-version-540675": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-540675": dial tcp 192.168.76.2:8443: connect: connection refused
	W0408 19:29:57.726153 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:57.726239 1040091 retry.go:31] will retry after 1.69179233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:58.183978 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0408 19:29:58.259802 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:58.259838 1040091 retry.go:31] will retry after 2.233296567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:58.643041 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0408 19:29:58.725290 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:58.725325 1040091 retry.go:31] will retry after 3.793511181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:58.837625 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0408 19:29:58.913493 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:58.913532 1040091 retry.go:31] will retry after 3.904630496s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:59.418677 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0408 19:29:59.488730 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:59.488765 1040091 retry.go:31] will retry after 4.235422005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:29:59.697507 1040091 node_ready.go:53] error getting node "old-k8s-version-540675": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-540675": dial tcp 192.168.76.2:8443: connect: connection refused
	I0408 19:30:00.493468 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0408 19:30:00.601485 1040091 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:30:00.601526 1040091 retry.go:31] will retry after 6.279302291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0408 19:30:02.519013 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0408 19:30:02.818605 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 19:30:03.724919 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0408 19:30:06.881528 1040091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 19:30:09.408369 1040091 node_ready.go:49] node "old-k8s-version-540675" has status "Ready":"True"
	I0408 19:30:09.408392 1040091 node_ready.go:38] duration metric: took 16.21228658s for node "old-k8s-version-540675" to be "Ready" ...
	I0408 19:30:09.408402 1040091 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 19:30:09.604879 1040091 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-8jdhp" in "kube-system" namespace to be "Ready" ...
	I0408 19:30:09.647218 1040091 pod_ready.go:92] pod "coredns-74ff55c5b-8jdhp" in "kube-system" namespace has status "Ready":"True"
	I0408 19:30:09.647291 1040091 pod_ready.go:81] duration metric: took 42.332921ms for pod "coredns-74ff55c5b-8jdhp" in "kube-system" namespace to be "Ready" ...
	I0408 19:30:09.647318 1040091 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-540675" in "kube-system" namespace to be "Ready" ...
	I0408 19:30:09.669853 1040091 pod_ready.go:92] pod "etcd-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"True"
	I0408 19:30:09.669927 1040091 pod_ready.go:81] duration metric: took 22.588758ms for pod "etcd-old-k8s-version-540675" in "kube-system" namespace to be "Ready" ...
	I0408 19:30:09.669963 1040091 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-540675" in "kube-system" namespace to be "Ready" ...
	I0408 19:30:10.394575 1040091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.575930273s)
	I0408 19:30:10.394657 1040091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.875600076s)
	I0408 19:30:10.692004 1040091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.967004425s)
	I0408 19:30:10.694361 1040091 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-540675 addons enable metrics-server
	
	I0408 19:30:10.692345 1040091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.810777369s)
	I0408 19:30:10.696250 1040091 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-540675"
	I0408 19:30:10.698474 1040091 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, dashboard, metrics-server
	I0408 19:30:10.700100 1040091 addons.go:505] duration metric: took 17.753619519s for enable addons: enabled=[storage-provisioner default-storageclass dashboard metrics-server]
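
The burst of apply/retry cycles above is minikube waiting out a restarting apiserver: each kubectl apply fails with "connection refused" on localhost:8443 and is re-queued with a growing, jittered delay (972ms, 1.6s, 1.8s, 2.4s, ... in the retry.go lines) until the server comes back at 19:30:02. Below is a minimal sketch of that retry-with-jittered-backoff pattern; the helper name, the doubling schedule, and the jitter formula are illustrative assumptions, not minikube's actual retry.go.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff re-runs op until it succeeds or maxAttempts is reached,
    // sleeping an increasing, jittered duration between failed attempts.
    func retryWithBackoff(maxAttempts int, op func() error) error {
    	delay := time.Second
    	var err error
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		// Jitter so concurrent appliers don't retry in lockstep, which is
    		// why the intervals in the log above are uneven rather than exact.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2 // simplified: double the base delay each round
    	}
    	return err
    }

    func main() {
    	calls := 0
    	// Simulate an apply that fails twice before the apiserver is back.
    	err := retryWithBackoff(5, func() error {
    		calls++
    		if calls < 3 {
    			return fmt.Errorf("connection to the server localhost:8443 was refused")
    		}
    		return nil
    	})
    	fmt.Println("final:", err)
    }

Run against a flaky operation, this prints the same kind of "will retry after ..." lines seen in the log.
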
	I0408 19:30:11.678162 1040091 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:14.175862 1040091 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:16.176731 1040091 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:17.677909 1040091 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"True"
	I0408 19:30:17.677937 1040091 pod_ready.go:81] duration metric: took 8.007953528s for pod "kube-apiserver-old-k8s-version-540675" in "kube-system" namespace to be "Ready" ...
	I0408 19:30:17.677949 1040091 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace to be "Ready" ...
	I0408 19:30:19.687099 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:22.185183 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:24.684364 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:27.185159 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:29.185832 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:31.683974 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:33.691067 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:36.184373 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:38.185163 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:40.185818 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:42.685558 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:45.187390 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:47.683980 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:49.684348 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:51.684975 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:53.685130 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:56.184921 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:30:58.186446 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:00.210272 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:02.685046 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:04.685830 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:07.185158 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:09.685467 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:12.184940 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:14.196205 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:16.685523 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:19.185031 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:21.185079 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:23.187971 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:25.684463 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:28.185554 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:30.186205 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:32.684578 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:34.684899 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:36.685248 1040091 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:38.185445 1040091 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"True"
	I0408 19:31:38.185469 1040091 pod_ready.go:81] duration metric: took 1m20.507511794s for pod "kube-controller-manager-old-k8s-version-540675" in "kube-system" namespace to be "Ready" ...
	I0408 19:31:38.185484 1040091 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jsgdk" in "kube-system" namespace to be "Ready" ...
	I0408 19:31:38.191161 1040091 pod_ready.go:92] pod "kube-proxy-jsgdk" in "kube-system" namespace has status "Ready":"True"
	I0408 19:31:38.191186 1040091 pod_ready.go:81] duration metric: took 5.69405ms for pod "kube-proxy-jsgdk" in "kube-system" namespace to be "Ready" ...
	I0408 19:31:38.191197 1040091 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-540675" in "kube-system" namespace to be "Ready" ...
	I0408 19:31:38.196274 1040091 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-540675" in "kube-system" namespace has status "Ready":"True"
	I0408 19:31:38.196300 1040091 pod_ready.go:81] duration metric: took 5.094853ms for pod "kube-scheduler-old-k8s-version-540675" in "kube-system" namespace to be "Ready" ...
	I0408 19:31:38.196313 1040091 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace to be "Ready" ...
	I0408 19:31:40.203428 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:42.203704 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:44.217038 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:46.704639 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:49.202362 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:51.202780 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:53.205306 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:55.702476 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:31:57.703382 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:00.303588 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:02.702677 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:05.202390 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:07.203524 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:09.702420 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:11.703800 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:14.202939 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:16.202977 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:18.702083 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:20.705161 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:23.202958 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:25.203719 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:27.205089 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:29.702583 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:32.203238 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:34.203674 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:36.204298 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:38.702284 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:40.703200 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:43.202788 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:45.204782 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:47.702947 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:50.203535 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:52.703816 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:55.202201 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:57.703153 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:32:59.705212 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:02.202642 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:04.203769 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:06.703421 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:09.202254 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:11.202781 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:13.703665 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:16.203173 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:18.703001 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:20.773176 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:23.203458 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:25.702929 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:28.203689 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:30.203855 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:32.704088 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:35.203380 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:37.203675 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:39.704628 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:42.204858 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:44.703046 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:46.704552 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:49.201870 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:51.203093 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:53.203163 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:55.203369 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:57.702262 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:33:59.702797 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:01.703269 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:04.204585 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:06.702865 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:08.703470 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:11.219749 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:13.703805 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:16.202933 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:18.203378 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:20.203599 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:22.702633 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:25.203584 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:27.702458 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:29.702505 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:31.702863 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:33.704171 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:36.202974 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:38.703448 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:41.202631 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:43.203378 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:45.204354 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:47.702378 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:49.705623 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:52.202096 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:54.203076 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:56.711658 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:34:59.204599 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:01.703993 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:04.202677 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:06.203053 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:08.702778 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:10.702879 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:12.704106 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:15.203764 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:17.203860 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:19.209052 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:21.709399 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:24.203105 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:26.210835 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:28.703832 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:30.704470 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:33.210611 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:35.703183 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:37.703715 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:38.204719 1040091 pod_ready.go:81] duration metric: took 4m0.008393212s for pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace to be "Ready" ...
	E0408 19:35:38.204742 1040091 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0408 19:35:38.204751 1040091 pod_ready.go:38] duration metric: took 5m28.796338351s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
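
The four minutes of pod_ready lines above are a fixed-interval poll of the metrics-server pod's Ready condition, aborted with "context deadline exceeded" once the 4m0s extra-wait budget is spent. A minimal client-go equivalent of that poll is sketched below; the kubeconfig path, the 2-second interval, and the Get-per-tick approach are assumptions for illustration, not minikube's pod_ready.go.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's PodReady condition is True.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Assumed path; inside the VM minikube reads /var/lib/minikube/kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Poll every 2s, give up after 4m — the same deadline that produced the
    	// "context deadline exceeded" error above.
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := client.CoreV1().Pods("kube-system").Get(ctx,
    				"metrics-server-9975d5f86-fvr87", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			return podIsReady(pod), nil
    		})
    	fmt.Println("wait result:", err)
    }

Here the pod never becomes Ready (the kubelet lines further down show why: its image pull against fake.domain can never succeed), so the poll runs to the deadline exactly as logged.
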
	I0408 19:35:38.204765 1040091 api_server.go:52] waiting for apiserver process to appear ...
	I0408 19:35:38.204792 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:35:38.204853 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:35:38.260969 1040091 cri.go:89] found id: "9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6"
	I0408 19:35:38.261043 1040091 cri.go:89] found id: "b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061"
	I0408 19:35:38.261071 1040091 cri.go:89] found id: ""
	I0408 19:35:38.261091 1040091 logs.go:276] 2 containers: [9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6 b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061]
	I0408 19:35:38.261172 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.265489 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.269691 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0408 19:35:38.269766 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:35:38.338094 1040091 cri.go:89] found id: "7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3"
	I0408 19:35:38.338113 1040091 cri.go:89] found id: "db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a"
	I0408 19:35:38.338118 1040091 cri.go:89] found id: ""
	I0408 19:35:38.338125 1040091 logs.go:276] 2 containers: [7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3 db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a]
	I0408 19:35:38.338179 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.342136 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.346077 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0408 19:35:38.346192 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:35:38.400613 1040091 cri.go:89] found id: "7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2"
	I0408 19:35:38.400688 1040091 cri.go:89] found id: "3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892"
	I0408 19:35:38.400695 1040091 cri.go:89] found id: ""
	I0408 19:35:38.400702 1040091 logs.go:276] 2 containers: [7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2 3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892]
	I0408 19:35:38.400789 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.407694 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.411547 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:35:38.411673 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:35:38.467628 1040091 cri.go:89] found id: "edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90"
	I0408 19:35:38.467696 1040091 cri.go:89] found id: "a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f"
	I0408 19:35:38.467715 1040091 cri.go:89] found id: ""
	I0408 19:35:38.467737 1040091 logs.go:276] 2 containers: [edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90 a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f]
	I0408 19:35:38.467821 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.488655 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.496507 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:35:38.496686 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:35:38.554128 1040091 cri.go:89] found id: "69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf"
	I0408 19:35:38.554201 1040091 cri.go:89] found id: "63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070"
	I0408 19:35:38.554220 1040091 cri.go:89] found id: ""
	I0408 19:35:38.554240 1040091 logs.go:276] 2 containers: [69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf 63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070]
	I0408 19:35:38.554324 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.558767 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.562377 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:35:38.562508 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:35:38.624832 1040091 cri.go:89] found id: "f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46"
	I0408 19:35:38.624905 1040091 cri.go:89] found id: "172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58"
	I0408 19:35:38.624924 1040091 cri.go:89] found id: ""
	I0408 19:35:38.624942 1040091 logs.go:276] 2 containers: [f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46 172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58]
	I0408 19:35:38.625026 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.629409 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.633137 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0408 19:35:38.633271 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:35:38.681443 1040091 cri.go:89] found id: "19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93"
	I0408 19:35:38.681514 1040091 cri.go:89] found id: "544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356"
	I0408 19:35:38.681530 1040091 cri.go:89] found id: ""
	I0408 19:35:38.681564 1040091 logs.go:276] 2 containers: [19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93 544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356]
	I0408 19:35:38.681659 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.685857 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.689903 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:35:38.690078 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:35:38.742785 1040091 cri.go:89] found id: "804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940"
	I0408 19:35:38.742857 1040091 cri.go:89] found id: ""
	I0408 19:35:38.742895 1040091 logs.go:276] 1 containers: [804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940]
	I0408 19:35:38.742984 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.749020 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0408 19:35:38.749137 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 19:35:38.804860 1040091 cri.go:89] found id: "15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19"
	I0408 19:35:38.804927 1040091 cri.go:89] found id: "0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63"
	I0408 19:35:38.804946 1040091 cri.go:89] found id: ""
	I0408 19:35:38.804968 1040091 logs.go:276] 2 containers: [15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19 0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63]
	I0408 19:35:38.805052 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.809143 1040091 ssh_runner.go:195] Run: which crictl
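
The which crictl / crictl ps -a --quiet --name=... pairs above are the discovery half of log collection: resolve the current and previous container IDs for each component, then dump each one's recent output, which is what the "Gathering logs" steps that follow do via crictl logs --tail 400. A small sketch of that two-step flow, assuming crictl is installed and sudo is available; the containerIDs helper is hypothetical, though the command strings are taken verbatim from the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs returns all CRI container IDs (running or exited) whose name
    // matches the filter, mirroring `sudo crictl ps -a --quiet --name=X`.
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := containerIDs("kube-apiserver")
    	if err != nil {
    		panic(err)
    	}
    	for _, id := range ids {
    		fmt.Printf("=== logs for %s ===\n", id)
    		// Same invocation as the "Gathering logs" steps below.
    		out, _ := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
    		fmt.Print(string(out))
    	}
    }

Two IDs per component (as in the "2 containers" lines above) are expected here, because the node restart left both the pre-restart and post-restart containers visible to `crictl ps -a`.
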
	I0408 19:35:38.812859 1040091 logs.go:123] Gathering logs for dmesg ...
	I0408 19:35:38.812937 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:35:38.837235 1040091 logs.go:123] Gathering logs for coredns [7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2] ...
	I0408 19:35:38.837316 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2"
	I0408 19:35:38.888003 1040091 logs.go:123] Gathering logs for kindnet [19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93] ...
	I0408 19:35:38.888080 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93"
	I0408 19:35:38.949725 1040091 logs.go:123] Gathering logs for kindnet [544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356] ...
	I0408 19:35:38.949806 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356"
	I0408 19:35:39.046663 1040091 logs.go:123] Gathering logs for kubernetes-dashboard [804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940] ...
	I0408 19:35:39.046743 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940"
	I0408 19:35:39.102564 1040091 logs.go:123] Gathering logs for kube-controller-manager [f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46] ...
	I0408 19:35:39.102644 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46"
	I0408 19:35:39.204565 1040091 logs.go:123] Gathering logs for storage-provisioner [15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19] ...
	I0408 19:35:39.204644 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19"
	I0408 19:35:39.293176 1040091 logs.go:123] Gathering logs for kube-scheduler [a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f] ...
	I0408 19:35:39.293252 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f"
	I0408 19:35:39.354667 1040091 logs.go:123] Gathering logs for kube-proxy [69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf] ...
	I0408 19:35:39.354741 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf"
	I0408 19:35:39.407570 1040091 logs.go:123] Gathering logs for kube-proxy [63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070] ...
	I0408 19:35:39.407645 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070"
	I0408 19:35:39.462603 1040091 logs.go:123] Gathering logs for kubelet ...
	I0408 19:35:39.462679 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 19:35:39.526129 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.354687     663 reflector.go:138] object-"default"/"default-token-gzsv4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gzsv4" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.526453 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.354764     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.526695 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.354859     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-zmg69": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-zmg69" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.526948 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.356924     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-vcs78": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-vcs78" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.527181 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.356979     663 reflector.go:138] object-"kube-system"/"coredns-token-w52sl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-w52sl" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.527440 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.357025     663 reflector.go:138] object-"kube-system"/"metrics-server-token-zxgqt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-zxgqt" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.527671 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.357068     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.527912 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.374468     663 reflector.go:138] object-"kube-system"/"kindnet-token-h6csz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-h6csz" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.536148 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:11 old-k8s-version-540675 kubelet[663]: E0408 19:30:11.776102     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:39.537747 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:12 old-k8s-version-540675 kubelet[663]: E0408 19:30:12.254992     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.540638 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:23 old-k8s-version-540675 kubelet[663]: E0408 19:30:23.971489     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:39.542852 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:34 old-k8s-version-540675 kubelet[663]: E0408 19:30:34.355720     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.543072 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:34 old-k8s-version-540675 kubelet[663]: E0408 19:30:34.977351     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.543423 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:35 old-k8s-version-540675 kubelet[663]: E0408 19:30:35.361567     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.543774 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:38 old-k8s-version-540675 kubelet[663]: E0408 19:30:38.654910     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.546650 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:49 old-k8s-version-540675 kubelet[663]: E0408 19:30:49.978868     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:39.547658 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:52 old-k8s-version-540675 kubelet[663]: E0408 19:30:52.422181     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.548051 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:58 old-k8s-version-540675 kubelet[663]: E0408 19:30:58.654477     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.548261 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:02 old-k8s-version-540675 kubelet[663]: E0408 19:31:02.963701     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.548608 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:09 old-k8s-version-540675 kubelet[663]: E0408 19:31:09.963237     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.548815 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:17 old-k8s-version-540675 kubelet[663]: E0408 19:31:17.964344     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.549436 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:21 old-k8s-version-540675 kubelet[663]: E0408 19:31:21.505624     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.549823 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:28 old-k8s-version-540675 kubelet[663]: E0408 19:31:28.654754     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.550041 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:29 old-k8s-version-540675 kubelet[663]: E0408 19:31:29.963597     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.550389 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:38 old-k8s-version-540675 kubelet[663]: E0408 19:31:38.963843     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.552851 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:40 old-k8s-version-540675 kubelet[663]: E0408 19:31:40.981110     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:39.553204 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:49 old-k8s-version-540675 kubelet[663]: E0408 19:31:49.963312     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.553410 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:53 old-k8s-version-540675 kubelet[663]: E0408 19:31:53.963765     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.554027 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:02 old-k8s-version-540675 kubelet[663]: E0408 19:32:02.592991     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.554234 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:07 old-k8s-version-540675 kubelet[663]: E0408 19:32:07.963579     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.554610 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:08 old-k8s-version-540675 kubelet[663]: E0408 19:32:08.654229     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.554822 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:18 old-k8s-version-540675 kubelet[663]: E0408 19:32:18.966254     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.555241 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:19 old-k8s-version-540675 kubelet[663]: E0408 19:32:19.963192     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.555460 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:29 old-k8s-version-540675 kubelet[663]: E0408 19:32:29.963578     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.555821 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:32 old-k8s-version-540675 kubelet[663]: E0408 19:32:32.963790     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.556031 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:42 old-k8s-version-540675 kubelet[663]: E0408 19:32:42.963650     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.556383 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:43 old-k8s-version-540675 kubelet[663]: E0408 19:32:43.963376     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.556593 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:53 old-k8s-version-540675 kubelet[663]: E0408 19:32:53.963583     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.556942 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:58 old-k8s-version-540675 kubelet[663]: E0408 19:32:58.964157     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.559402 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:04 old-k8s-version-540675 kubelet[663]: E0408 19:33:04.972212     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:39.559775 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:09 old-k8s-version-540675 kubelet[663]: E0408 19:33:09.963440     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.559984 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:17 old-k8s-version-540675 kubelet[663]: E0408 19:33:17.963569     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.560333 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:21 old-k8s-version-540675 kubelet[663]: E0408 19:33:21.963178     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.560555 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:29 old-k8s-version-540675 kubelet[663]: E0408 19:33:29.963894     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.561176 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:36 old-k8s-version-540675 kubelet[663]: E0408 19:33:36.797444     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.561527 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:38 old-k8s-version-540675 kubelet[663]: E0408 19:33:38.654611     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.561739 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:42 old-k8s-version-540675 kubelet[663]: E0408 19:33:42.965353     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.562098 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:52 old-k8s-version-540675 kubelet[663]: E0408 19:33:52.963651     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.562305 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:57 old-k8s-version-540675 kubelet[663]: E0408 19:33:57.963546     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.562652 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:05 old-k8s-version-540675 kubelet[663]: E0408 19:34:05.963339     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.562857 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:10 old-k8s-version-540675 kubelet[663]: E0408 19:34:10.966560     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.563206 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:16 old-k8s-version-540675 kubelet[663]: E0408 19:34:16.963733     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.563414 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:21 old-k8s-version-540675 kubelet[663]: E0408 19:34:21.963613     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.563834 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:30 old-k8s-version-540675 kubelet[663]: E0408 19:34:30.964250     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.564076 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:36 old-k8s-version-540675 kubelet[663]: E0408 19:34:36.963651     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.564426 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:44 old-k8s-version-540675 kubelet[663]: E0408 19:34:44.963713     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.564668 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:51 old-k8s-version-540675 kubelet[663]: E0408 19:34:51.963845     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.565017 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:55 old-k8s-version-540675 kubelet[663]: E0408 19:34:55.963188     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.565237 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:02 old-k8s-version-540675 kubelet[663]: E0408 19:35:02.969106     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.565591 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:10 old-k8s-version-540675 kubelet[663]: E0408 19:35:10.966162     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.565803 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:15 old-k8s-version-540675 kubelet[663]: E0408 19:35:15.963675     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.566184 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:23 old-k8s-version-540675 kubelet[663]: E0408 19:35:23.963177     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.566391 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:29 old-k8s-version-540675 kubelet[663]: E0408 19:35:29.963846     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.566741 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:38 old-k8s-version-540675 kubelet[663]: E0408 19:35:38.967622     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
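
Note: the long run of "Found kubelet problem" warnings above reduces to two recurring failures. metrics-server cannot pull fake.domain/registry.k8s.io/echoserver:1.4 (the log shows "lookup fake.domain ... no such host", so the pod alternates between ErrImagePull and ImagePullBackOff), and dashboard-metrics-scraper is crash-looping with the usual exponential back-off (10s, 20s, 40s, 1m20s, 2m40s). Below is a minimal Go sketch of the kind of pattern scan that produces these "Found kubelet problem" lines; the patterns listed are illustrative assumptions, not minikube's actual list in logs.go.

package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// problemPatterns is a hypothetical subset; the real tool maintains its own list.
var problemPatterns = []*regexp.Regexp{
	regexp.MustCompile(`ErrImagePull`),
	regexp.MustCompile(`ImagePullBackOff`),
	regexp.MustCompile(`CrashLoopBackOff`),
	regexp.MustCompile(`Failed to watch \*v1\.(Secret|ConfigMap)`),
}

// findKubeletProblems scans journalctl output line by line and keeps any
// line matching a known problem pattern.
func findKubeletProblems(journal string) []string {
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		for _, re := range problemPatterns {
			if re.MatchString(line) {
				problems = append(problems, line)
				break
			}
		}
	}
	return problems
}

func main() {
	// Two sample lines modeled on the trace above.
	journal := "Apr 08 19:35:38 old-k8s-version-540675 kubelet[663]: ... CrashLoopBackOff: back-off 2m40s ...\n" +
		"Apr 08 19:35:29 old-k8s-version-540675 kubelet[663]: ... ImagePullBackOff: Back-off pulling image ..."
	for _, p := range findKubeletProblems(journal) {
		fmt.Println("Found kubelet problem:", p)
	}
}
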
	I0408 19:35:39.566763 1040091 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:35:39.566787 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 19:35:39.775939 1040091 logs.go:123] Gathering logs for kube-apiserver [b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061] ...
	I0408 19:35:39.775971 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061"
	I0408 19:35:39.837916 1040091 logs.go:123] Gathering logs for etcd [db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a] ...
	I0408 19:35:39.837960 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a"
	I0408 19:35:39.887500 1040091 logs.go:123] Gathering logs for kube-scheduler [edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90] ...
	I0408 19:35:39.887527 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90"
	I0408 19:35:39.934653 1040091 logs.go:123] Gathering logs for storage-provisioner [0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63] ...
	I0408 19:35:39.934681 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63"
	I0408 19:35:39.981694 1040091 logs.go:123] Gathering logs for container status ...
	I0408 19:35:39.981722 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:35:40.058506 1040091 logs.go:123] Gathering logs for kube-apiserver [9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6] ...
	I0408 19:35:40.058535 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6"
	I0408 19:35:40.152070 1040091 logs.go:123] Gathering logs for etcd [7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3] ...
	I0408 19:35:40.152105 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3"
	I0408 19:35:40.226742 1040091 logs.go:123] Gathering logs for coredns [3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892] ...
	I0408 19:35:40.226821 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892"
	I0408 19:35:40.268329 1040091 logs.go:123] Gathering logs for kube-controller-manager [172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58] ...
	I0408 19:35:40.268355 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58"
	I0408 19:35:40.352663 1040091 logs.go:123] Gathering logs for containerd ...
	I0408 19:35:40.352701 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
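
For reference, each "Gathering logs for ..." step above shells out to the node and tails that container's logs with crictl (sudo /usr/bin/crictl logs --tail 400 <id>), with journalctl used for the containerd and kubelet units. A self-contained Go sketch of the same loop follows; it runs the commands locally for illustration (minikube runs them over SSH), and the container IDs are the ones from this trace, so they will differ on any other run.

package main

import (
	"fmt"
	"os/exec"
)

// gatherContainerLogs tails the logs of each discovered container,
// mirroring the "Gathering logs for <component> [<id>] ..." steps above.
func gatherContainerLogs(ids map[string]string) {
	for component, id := range ids {
		fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
		// Equivalent to: sudo /usr/bin/crictl logs --tail 400 <id>
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("  %s: %v\n", component, err)
			continue
		}
		fmt.Printf("%s", out)
	}
}

func main() {
	gatherContainerLogs(map[string]string{
		"kube-apiserver": "b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061",
		"etcd":           "db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a",
	})
}
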
	I0408 19:35:40.415919 1040091 out.go:304] Setting ErrFile to fd 2...
	I0408 19:35:40.415953 1040091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 19:35:40.416027 1040091 out.go:239] X Problems detected in kubelet:
	W0408 19:35:40.416039 1040091 out.go:239]   Apr 08 19:35:10 old-k8s-version-540675 kubelet[663]: E0408 19:35:10.966162     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:40.416048 1040091 out.go:239]   Apr 08 19:35:15 old-k8s-version-540675 kubelet[663]: E0408 19:35:15.963675     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:40.416064 1040091 out.go:239]   Apr 08 19:35:23 old-k8s-version-540675 kubelet[663]: E0408 19:35:23.963177     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:40.416090 1040091 out.go:239]   Apr 08 19:35:29 old-k8s-version-540675 kubelet[663]: E0408 19:35:29.963846     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:40.416105 1040091 out.go:239]   Apr 08 19:35:38 old-k8s-version-540675 kubelet[663]: E0408 19:35:38.967622     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	I0408 19:35:40.416123 1040091 out.go:304] Setting ErrFile to fd 2...
	I0408 19:35:40.416136 1040091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:35:50.417548 1040091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:35:50.432523 1040091 api_server.go:72] duration metric: took 5m57.486373343s to wait for apiserver process to appear ...
	I0408 19:35:50.432547 1040091 api_server.go:88] waiting for apiserver healthz status ...
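
Having confirmed the apiserver process exists via pgrep, the run now waits for the apiserver's healthz status. A minimal Go sketch of such a wait follows. The endpoint https://192.168.76.2:8443/healthz is an assumption for illustration (only the 192.168.76.1 resolver address appears in this excerpt), and TLS verification is skipped purely because this is a throwaway probe.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given healthz URL until it returns 200 OK or
// the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz not ready within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
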
	I0408 19:35:50.432581 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:35:50.432639 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:35:50.485645 1040091 cri.go:89] found id: "9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6"
	I0408 19:35:50.485666 1040091 cri.go:89] found id: "b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061"
	I0408 19:35:50.485671 1040091 cri.go:89] found id: ""
	I0408 19:35:50.485678 1040091 logs.go:276] 2 containers: [9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6 b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061]
	I0408 19:35:50.485734 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.489789 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.494693 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0408 19:35:50.494763 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:35:50.535558 1040091 cri.go:89] found id: "7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3"
	I0408 19:35:50.535578 1040091 cri.go:89] found id: "db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a"
	I0408 19:35:50.535583 1040091 cri.go:89] found id: ""
	I0408 19:35:50.535591 1040091 logs.go:276] 2 containers: [7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3 db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a]
	I0408 19:35:50.535649 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.540120 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.544033 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0408 19:35:50.544107 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:35:50.588243 1040091 cri.go:89] found id: "7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2"
	I0408 19:35:50.588265 1040091 cri.go:89] found id: "3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892"
	I0408 19:35:50.588270 1040091 cri.go:89] found id: ""
	I0408 19:35:50.588286 1040091 logs.go:276] 2 containers: [7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2 3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892]
	I0408 19:35:50.588351 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.592067 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.595759 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:35:50.595951 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:35:50.657884 1040091 cri.go:89] found id: "edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90"
	I0408 19:35:50.657909 1040091 cri.go:89] found id: "a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f"
	I0408 19:35:50.657915 1040091 cri.go:89] found id: ""
	I0408 19:35:50.657933 1040091 logs.go:276] 2 containers: [edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90 a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f]
	I0408 19:35:50.657990 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.663278 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.667093 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:35:50.667170 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:35:50.704272 1040091 cri.go:89] found id: "69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf"
	I0408 19:35:50.704296 1040091 cri.go:89] found id: "63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070"
	I0408 19:35:50.704301 1040091 cri.go:89] found id: ""
	I0408 19:35:50.704309 1040091 logs.go:276] 2 containers: [69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf 63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070]
	I0408 19:35:50.704384 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.708641 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.712265 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:35:50.712341 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:35:50.754783 1040091 cri.go:89] found id: "f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46"
	I0408 19:35:50.754802 1040091 cri.go:89] found id: "172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58"
	I0408 19:35:50.754806 1040091 cri.go:89] found id: ""
	I0408 19:35:50.754813 1040091 logs.go:276] 2 containers: [f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46 172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58]
	I0408 19:35:50.754884 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.759728 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.763582 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0408 19:35:50.763672 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:35:50.815649 1040091 cri.go:89] found id: "19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93"
	I0408 19:35:50.815671 1040091 cri.go:89] found id: "544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356"
	I0408 19:35:50.815676 1040091 cri.go:89] found id: ""
	I0408 19:35:50.815683 1040091 logs.go:276] 2 containers: [19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93 544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356]
	I0408 19:35:50.815789 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.819818 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.823422 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:35:50.823518 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:35:50.868462 1040091 cri.go:89] found id: "804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940"
	I0408 19:35:50.868531 1040091 cri.go:89] found id: ""
	I0408 19:35:50.868554 1040091 logs.go:276] 1 containers: [804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940]
	I0408 19:35:50.868636 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.872672 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0408 19:35:50.872769 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 19:35:50.931700 1040091 cri.go:89] found id: "15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19"
	I0408 19:35:50.931728 1040091 cri.go:89] found id: "0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63"
	I0408 19:35:50.931733 1040091 cri.go:89] found id: ""
	I0408 19:35:50.931740 1040091 logs.go:276] 2 containers: [15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19 0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63]
	I0408 19:35:50.931852 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.936883 1040091 ssh_runner.go:195] Run: which crictl
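
Each "listing CRI containers" step above runs sudo crictl ps -a --quiet --name=<component>, which prints one container ID per line (including exited containers, because of -a). Most components report two IDs here, likely an exited container from before the restart alongside the running one, while kubernetes-dashboard reports only one. A Go sketch of that discovery step:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all CRI containers (any state)
// whose name matches the given component.
func listContainerIDs(name string) ([]string, error) {
	// Equivalent to: sudo crictl ps -a --quiet --name=<name>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "storage-provisioner"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Println(c, err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
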
	I0408 19:35:50.941169 1040091 logs.go:123] Gathering logs for container status ...
	I0408 19:35:50.941198 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:35:51.033741 1040091 logs.go:123] Gathering logs for etcd [7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3] ...
	I0408 19:35:51.033773 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3"
	I0408 19:35:51.083564 1040091 logs.go:123] Gathering logs for etcd [db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a] ...
	I0408 19:35:51.083594 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a"
	I0408 19:35:51.133070 1040091 logs.go:123] Gathering logs for coredns [7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2] ...
	I0408 19:35:51.133100 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2"
	I0408 19:35:51.188619 1040091 logs.go:123] Gathering logs for kube-controller-manager [172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58] ...
	I0408 19:35:51.188647 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58"
	I0408 19:35:51.267350 1040091 logs.go:123] Gathering logs for kindnet [19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93] ...
	I0408 19:35:51.267413 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93"
	I0408 19:35:51.345361 1040091 logs.go:123] Gathering logs for containerd ...
	I0408 19:35:51.345387 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0408 19:35:51.417368 1040091 logs.go:123] Gathering logs for dmesg ...
	I0408 19:35:51.417403 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:35:51.438377 1040091 logs.go:123] Gathering logs for coredns [3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892] ...
	I0408 19:35:51.438408 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892"
	I0408 19:35:51.485566 1040091 logs.go:123] Gathering logs for kubernetes-dashboard [804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940] ...
	I0408 19:35:51.485595 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940"
	I0408 19:35:51.535300 1040091 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:35:51.535328 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 19:35:51.685266 1040091 logs.go:123] Gathering logs for kube-apiserver [9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6] ...
	I0408 19:35:51.685299 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6"
	I0408 19:35:51.747971 1040091 logs.go:123] Gathering logs for kube-apiserver [b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061] ...
	I0408 19:35:51.748010 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061"
	I0408 19:35:51.816472 1040091 logs.go:123] Gathering logs for kube-scheduler [a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f] ...
	I0408 19:35:51.816521 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f"
	I0408 19:35:51.862579 1040091 logs.go:123] Gathering logs for kube-proxy [69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf] ...
	I0408 19:35:51.862611 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf"
	I0408 19:35:51.920721 1040091 logs.go:123] Gathering logs for kindnet [544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356] ...
	I0408 19:35:51.920750 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356"
	I0408 19:35:51.996944 1040091 logs.go:123] Gathering logs for kubelet ...
	I0408 19:35:51.997012 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 19:35:52.054348 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.354687     663 reflector.go:138] object-"default"/"default-token-gzsv4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gzsv4" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.054620 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.354764     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.054845 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.354859     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-zmg69": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-zmg69" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.055084 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.356924     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-vcs78": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-vcs78" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.055301 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.356979     663 reflector.go:138] object-"kube-system"/"coredns-token-w52sl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-w52sl" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.055543 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.357025     663 reflector.go:138] object-"kube-system"/"metrics-server-token-zxgqt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-zxgqt" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.055828 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.357068     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.056051 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.374468     663 reflector.go:138] object-"kube-system"/"kindnet-token-h6csz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-h6csz" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.063919 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:11 old-k8s-version-540675 kubelet[663]: E0408 19:30:11.776102     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:52.065482 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:12 old-k8s-version-540675 kubelet[663]: E0408 19:30:12.254992     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.068355 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:23 old-k8s-version-540675 kubelet[663]: E0408 19:30:23.971489     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:52.070485 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:34 old-k8s-version-540675 kubelet[663]: E0408 19:30:34.355720     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.070671 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:34 old-k8s-version-540675 kubelet[663]: E0408 19:30:34.977351     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.071001 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:35 old-k8s-version-540675 kubelet[663]: E0408 19:30:35.361567     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.071330 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:38 old-k8s-version-540675 kubelet[663]: E0408 19:30:38.654910     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.074126 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:49 old-k8s-version-540675 kubelet[663]: E0408 19:30:49.978868     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:52.075075 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:52 old-k8s-version-540675 kubelet[663]: E0408 19:30:52.422181     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.075409 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:58 old-k8s-version-540675 kubelet[663]: E0408 19:30:58.654477     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.075596 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:02 old-k8s-version-540675 kubelet[663]: E0408 19:31:02.963701     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.075924 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:09 old-k8s-version-540675 kubelet[663]: E0408 19:31:09.963237     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.076108 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:17 old-k8s-version-540675 kubelet[663]: E0408 19:31:17.964344     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.076700 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:21 old-k8s-version-540675 kubelet[663]: E0408 19:31:21.505624     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.077027 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:28 old-k8s-version-540675 kubelet[663]: E0408 19:31:28.654754     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.077211 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:29 old-k8s-version-540675 kubelet[663]: E0408 19:31:29.963597     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.077547 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:38 old-k8s-version-540675 kubelet[663]: E0408 19:31:38.963843     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.080029 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:40 old-k8s-version-540675 kubelet[663]: E0408 19:31:40.981110     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:52.080360 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:49 old-k8s-version-540675 kubelet[663]: E0408 19:31:49.963312     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.080544 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:53 old-k8s-version-540675 kubelet[663]: E0408 19:31:53.963765     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.081137 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:02 old-k8s-version-540675 kubelet[663]: E0408 19:32:02.592991     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.081326 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:07 old-k8s-version-540675 kubelet[663]: E0408 19:32:07.963579     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.081656 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:08 old-k8s-version-540675 kubelet[663]: E0408 19:32:08.654229     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.081856 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:18 old-k8s-version-540675 kubelet[663]: E0408 19:32:18.966254     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.082190 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:19 old-k8s-version-540675 kubelet[663]: E0408 19:32:19.963192     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.082377 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:29 old-k8s-version-540675 kubelet[663]: E0408 19:32:29.963578     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.082705 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:32 old-k8s-version-540675 kubelet[663]: E0408 19:32:32.963790     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.082891 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:42 old-k8s-version-540675 kubelet[663]: E0408 19:32:42.963650     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.083218 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:43 old-k8s-version-540675 kubelet[663]: E0408 19:32:43.963376     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.083403 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:53 old-k8s-version-540675 kubelet[663]: E0408 19:32:53.963583     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.083731 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:58 old-k8s-version-540675 kubelet[663]: E0408 19:32:58.964157     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.086182 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:04 old-k8s-version-540675 kubelet[663]: E0408 19:33:04.972212     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:52.086562 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:09 old-k8s-version-540675 kubelet[663]: E0408 19:33:09.963440     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.086750 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:17 old-k8s-version-540675 kubelet[663]: E0408 19:33:17.963569     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.087077 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:21 old-k8s-version-540675 kubelet[663]: E0408 19:33:21.963178     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.087263 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:29 old-k8s-version-540675 kubelet[663]: E0408 19:33:29.963894     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.087856 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:36 old-k8s-version-540675 kubelet[663]: E0408 19:33:36.797444     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.088191 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:38 old-k8s-version-540675 kubelet[663]: E0408 19:33:38.654611     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.088376 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:42 old-k8s-version-540675 kubelet[663]: E0408 19:33:42.965353     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.088705 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:52 old-k8s-version-540675 kubelet[663]: E0408 19:33:52.963651     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.088891 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:57 old-k8s-version-540675 kubelet[663]: E0408 19:33:57.963546     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.089220 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:05 old-k8s-version-540675 kubelet[663]: E0408 19:34:05.963339     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.089404 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:10 old-k8s-version-540675 kubelet[663]: E0408 19:34:10.966560     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.089732 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:16 old-k8s-version-540675 kubelet[663]: E0408 19:34:16.963733     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.089960 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:21 old-k8s-version-540675 kubelet[663]: E0408 19:34:21.963613     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.090315 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:30 old-k8s-version-540675 kubelet[663]: E0408 19:34:30.964250     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.090502 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:36 old-k8s-version-540675 kubelet[663]: E0408 19:34:36.963651     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.090837 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:44 old-k8s-version-540675 kubelet[663]: E0408 19:34:44.963713     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.091022 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:51 old-k8s-version-540675 kubelet[663]: E0408 19:34:51.963845     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.091355 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:55 old-k8s-version-540675 kubelet[663]: E0408 19:34:55.963188     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.091542 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:02 old-k8s-version-540675 kubelet[663]: E0408 19:35:02.969106     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.091869 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:10 old-k8s-version-540675 kubelet[663]: E0408 19:35:10.966162     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.092056 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:15 old-k8s-version-540675 kubelet[663]: E0408 19:35:15.963675     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.092384 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:23 old-k8s-version-540675 kubelet[663]: E0408 19:35:23.963177     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.092568 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:29 old-k8s-version-540675 kubelet[663]: E0408 19:35:29.963846     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.092895 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:38 old-k8s-version-540675 kubelet[663]: E0408 19:35:38.967622     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.093078 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:41 old-k8s-version-540675 kubelet[663]: E0408 19:35:41.963745     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.093406 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:51 old-k8s-version-540675 kubelet[663]: E0408 19:35:51.963484     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	I0408 19:35:52.093415 1040091 logs.go:123] Gathering logs for kube-scheduler [edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90] ...
	I0408 19:35:52.093430 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90"
	I0408 19:35:52.134609 1040091 logs.go:123] Gathering logs for kube-proxy [63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070] ...
	I0408 19:35:52.134636 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070"
	I0408 19:35:52.185233 1040091 logs.go:123] Gathering logs for kube-controller-manager [f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46] ...
	I0408 19:35:52.185259 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46"
	I0408 19:35:52.258111 1040091 logs.go:123] Gathering logs for storage-provisioner [15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19] ...
	I0408 19:35:52.258141 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19"
	I0408 19:35:52.304897 1040091 logs.go:123] Gathering logs for storage-provisioner [0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63] ...
	I0408 19:35:52.304924 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63"
	I0408 19:35:52.359635 1040091 out.go:304] Setting ErrFile to fd 2...
	I0408 19:35:52.359661 1040091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 19:35:52.359740 1040091 out.go:239] X Problems detected in kubelet:
	W0408 19:35:52.359755 1040091 out.go:239]   Apr 08 19:35:23 old-k8s-version-540675 kubelet[663]: E0408 19:35:23.963177     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.359877 1040091 out.go:239]   Apr 08 19:35:29 old-k8s-version-540675 kubelet[663]: E0408 19:35:29.963846     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.359908 1040091 out.go:239]   Apr 08 19:35:38 old-k8s-version-540675 kubelet[663]: E0408 19:35:38.967622     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.359920 1040091 out.go:239]   Apr 08 19:35:41 old-k8s-version-540675 kubelet[663]: E0408 19:35:41.963745     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.359931 1040091 out.go:239]   Apr 08 19:35:51 old-k8s-version-540675 kubelet[663]: E0408 19:35:51.963484     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	I0408 19:35:52.359938 1040091 out.go:304] Setting ErrFile to fd 2...
	I0408 19:35:52.359945 1040091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:36:02.361001 1040091 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0408 19:36:02.518753 1040091 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0408 19:36:02.521014 1040091 out.go:177] 
	W0408 19:36:02.523038 1040091 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0408 19:36:02.523081 1040091 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0408 19:36:02.523099 1040091 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0408 19:36:02.523105 1040091 out.go:239] * 
	W0408 19:36:02.524124 1040091 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 19:36:02.526444 1040091 out.go:177] 

                                                
                                                
** /stderr **
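Note: most of the kubelet noise in the stderr above is expected for this test rather than a failure in itself. The "no relationship found between node ... and this object" secret/configmap errors appear to be transient Node-authorizer denials while the restarted kubelet re-registers its pods, and the repeating ErrImagePull/ImagePullBackOff entries come from metrics-server being pinned to the non-resolvable fake.domain registry. The decisive error is the final K8S_UNHEALTHY_CONTROL_PLANE exit: /healthz answers 200, but minikube never sees the control plane report the requested v1.20.0. A minimal sketch for checking the version the node actually advertises (context name taken from the profile above; the jsonpath fields are standard kubectl):

	# list each node with the kubelet version it reports to the API server
	kubectl --context old-k8s-version-540675 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeletVersion}{"\n"}{end}'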
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-540675 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
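The suggestion printed in the error output above is the documented recovery path for the linked issue #11417. A hedged sketch using the same binary under test (destructive: removes every minikube profile and all cached state on the host):

	# wipe all profiles and cached artifacts, as the failure message recommends
	out/minikube-linux-arm64 delete --all --purge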
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-540675
helpers_test.go:235: (dbg) docker inspect old-k8s-version-540675:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ddf6d484e9d329b2c219e319ea1e25ecb2cd977a03c74befffdd15dcc4914bdf",
	        "Created": "2024-04-08T19:26:35.970887216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1040285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-08T19:29:46.383480233Z",
	            "FinishedAt": "2024-04-08T19:29:45.234500681Z"
	        },
	        "Image": "sha256:8071b9dd214010e53befdd8360b63c717c30e750b027ce9f279f5c79f4d48a44",
	        "ResolvConfPath": "/var/lib/docker/containers/ddf6d484e9d329b2c219e319ea1e25ecb2cd977a03c74befffdd15dcc4914bdf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ddf6d484e9d329b2c219e319ea1e25ecb2cd977a03c74befffdd15dcc4914bdf/hostname",
	        "HostsPath": "/var/lib/docker/containers/ddf6d484e9d329b2c219e319ea1e25ecb2cd977a03c74befffdd15dcc4914bdf/hosts",
	        "LogPath": "/var/lib/docker/containers/ddf6d484e9d329b2c219e319ea1e25ecb2cd977a03c74befffdd15dcc4914bdf/ddf6d484e9d329b2c219e319ea1e25ecb2cd977a03c74befffdd15dcc4914bdf-json.log",
	        "Name": "/old-k8s-version-540675",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-540675:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-540675",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b123d4fd8063848c7d4ad7d1afa367a650fec1bdb9e0054190fd1563c07beab2-init/diff:/var/lib/docker/overlay2/56d7d8514c63dab1b3fb6d26c1f92815f34275e9a0ff6f17f417c17da312f7ae/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b123d4fd8063848c7d4ad7d1afa367a650fec1bdb9e0054190fd1563c07beab2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b123d4fd8063848c7d4ad7d1afa367a650fec1bdb9e0054190fd1563c07beab2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b123d4fd8063848c7d4ad7d1afa367a650fec1bdb9e0054190fd1563c07beab2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-540675",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-540675/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-540675",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-540675",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-540675",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f8fb7dc3d98c32be1cfb7d1ca5aec9349ff9685e0ff37a7f9b3eb2f9e3d2de8f",
	            "SandboxKey": "/var/run/docker/netns/f8fb7dc3d98c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33860"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33859"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33856"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33858"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33857"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-540675": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "b99d1e5a6f3548e46088da32a58141d34d809ad0d1c4b02ef0a06a68e2401bd8",
	                    "EndpointID": "1e40435c7900440694fc16d283456a2d7c9c9889efcba5a85eb1560b56a9e544",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-540675",
	                        "ddf6d484e9d3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
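The inspect output shows the node container up since 19:29:46 with the API-server port 8443/tcp published on 127.0.0.1:33857; the healthz probe in the log above instead reached the container directly at its network address 192.168.76.2:8443. A sketch for pulling that port mapping straight out of docker inspect (the -f Go-template flag is standard docker CLI):

	# print the host port bound to the container's 8443/tcp (API server)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-540675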
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-540675 -n old-k8s-version-540675
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-540675 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-540675 logs -n 25: (2.416764792s)
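The logs -n 25 invocation above keeps only the tail of each log source. For a complete capture suitable for attaching to the GitHub issue referenced in the error output, the logs can be written to a file instead, as the suggestion box itself recommends:

	# capture full logs to a file for the issue report
	out/minikube-linux-arm64 -p old-k8s-version-540675 logs --file=logs.txt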
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p cert-expiration-022201                              | cert-expiration-022201       | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:25 UTC | 08 Apr 24 19:25 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=3m                                   |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	| ssh     | force-systemd-env-749206                               | force-systemd-env-749206     | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:25 UTC | 08 Apr 24 19:25 UTC |
	|         | ssh cat                                                |                              |         |                |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| delete  | -p force-systemd-env-749206                            | force-systemd-env-749206     | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:25 UTC | 08 Apr 24 19:25 UTC |
	| start   | -p cert-options-427136                                 | cert-options-427136          | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:25 UTC | 08 Apr 24 19:26 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                              |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                              |         |                |                     |                     |
	|         | --apiserver-names=localhost                            |                              |         |                |                     |                     |
	|         | --apiserver-names=www.google.com                       |                              |         |                |                     |                     |
	|         | --apiserver-port=8555                                  |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	| ssh     | cert-options-427136 ssh                                | cert-options-427136          | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:26 UTC | 08 Apr 24 19:26 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |                |                     |                     |
	| ssh     | -p cert-options-427136 -- sudo                         | cert-options-427136          | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:26 UTC | 08 Apr 24 19:26 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |                |                     |                     |
	| delete  | -p cert-options-427136                                 | cert-options-427136          | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:26 UTC | 08 Apr 24 19:26 UTC |
	| start   | -p old-k8s-version-540675                              | old-k8s-version-540675       | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:26 UTC | 08 Apr 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| start   | -p cert-expiration-022201                              | cert-expiration-022201       | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:28 UTC | 08 Apr 24 19:29 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-022201                              | cert-expiration-022201       | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:29 UTC | 08 Apr 24 19:29 UTC |
	| start   | -p                                                     | default-k8s-diff-port-537054 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:29 UTC | 08 Apr 24 19:30 UTC |
	|         | default-k8s-diff-port-537054                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-540675        | old-k8s-version-540675       | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:29 UTC | 08 Apr 24 19:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-540675                              | old-k8s-version-540675       | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:29 UTC | 08 Apr 24 19:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-540675             | old-k8s-version-540675       | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:29 UTC | 08 Apr 24 19:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-540675                              | old-k8s-version-540675       | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-537054  | default-k8s-diff-port-537054 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:30 UTC | 08 Apr 24 19:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-537054 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:30 UTC | 08 Apr 24 19:30 UTC |
	|         | default-k8s-diff-port-537054                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-537054       | default-k8s-diff-port-537054 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:30 UTC | 08 Apr 24 19:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-537054 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:30 UTC | 08 Apr 24 19:34 UTC |
	|         | default-k8s-diff-port-537054                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| image   | default-k8s-diff-port-537054                           | default-k8s-diff-port-537054 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:35 UTC | 08 Apr 24 19:35 UTC |
	|         | image list --format=json                               |                              |         |                |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-537054 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:35 UTC | 08 Apr 24 19:35 UTC |
	|         | default-k8s-diff-port-537054                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-537054 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:35 UTC | 08 Apr 24 19:35 UTC |
	|         | default-k8s-diff-port-537054                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-537054 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:35 UTC | 08 Apr 24 19:35 UTC |
	|         | default-k8s-diff-port-537054                           |                              |         |                |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-537054 | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:35 UTC | 08 Apr 24 19:35 UTC |
	|         | default-k8s-diff-port-537054                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-160920                                  | embed-certs-160920           | jenkins | v1.33.0-beta.0 | 08 Apr 24 19:35 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 19:35:15
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
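The header format above is the standard glog/klog one. A minimal Go sketch (not from the minikube source) for splitting such a line into its fields:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var logLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		m := logLine.FindStringSubmatch("I0408 19:35:15.766067 1049784 out.go:291] Setting OutFile to fd 1 ...")
		if m != nil {
			fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s:%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}
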
	I0408 19:35:15.766067 1049784 out.go:291] Setting OutFile to fd 1 ...
	I0408 19:35:15.766208 1049784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:35:15.766226 1049784 out.go:304] Setting ErrFile to fd 2...
	I0408 19:35:15.766233 1049784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:35:15.766621 1049784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
	I0408 19:35:15.767196 1049784 out.go:298] Setting JSON to false
	I0408 19:35:15.768321 1049784 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15460,"bootTime":1712589456,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0408 19:35:15.768426 1049784 start.go:139] virtualization:  
	I0408 19:35:15.771599 1049784 out.go:177] * [embed-certs-160920] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0408 19:35:15.773953 1049784 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 19:35:15.774152 1049784 notify.go:220] Checking for updates...
	I0408 19:35:15.775752 1049784 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 19:35:15.777787 1049784 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	I0408 19:35:15.779861 1049784 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	I0408 19:35:15.781674 1049784 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0408 19:35:15.784035 1049784 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 19:35:12.704106 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:15.203764 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:15.786616 1049784 config.go:182] Loaded profile config "old-k8s-version-540675": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0408 19:35:15.786754 1049784 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 19:35:15.807181 1049784 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0408 19:35:15.807303 1049784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 19:35:15.886519 1049784 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-08 19:35:15.876707263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 19:35:15.886627 1049784 docker.go:295] overlay module found
	I0408 19:35:15.889048 1049784 out.go:177] * Using the docker driver based on user configuration
	I0408 19:35:15.890926 1049784 start.go:297] selected driver: docker
	I0408 19:35:15.890946 1049784 start.go:901] validating driver "docker" against <nil>
	I0408 19:35:15.890962 1049784 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 19:35:15.891621 1049784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 19:35:15.942263 1049784 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-08 19:35:15.933505351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 19:35:15.942433 1049784 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 19:35:15.942672 1049784 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 19:35:15.944608 1049784 out.go:177] * Using Docker driver with root privileges
	I0408 19:35:15.946221 1049784 cni.go:84] Creating CNI manager for ""
	I0408 19:35:15.946243 1049784 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0408 19:35:15.946254 1049784 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 19:35:15.946339 1049784 start.go:340] cluster config:
	{Name:embed-certs-160920 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-160920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
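The cluster config dumped above is a single Go struct rendered with %+v. A trimmed sketch of its shape, keeping only the fields this run actually sets (field names copied from the dump; the real minikube config struct is much larger):

	package main

	import "fmt"

	// KubernetesConfig mirrors the nested block in the dump above.
	type KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
		NetworkPlugin     string
		ServiceCIDR       string
	}

	// ClusterConfig keeps only the fields exercised by this start.
	type ClusterConfig struct {
		Name             string
		Memory           int // MB
		CPUs             int
		Driver           string
		EmbedCerts       bool
		APIServerPort    int
		KubernetesConfig KubernetesConfig
	}

	func main() {
		cc := ClusterConfig{
			Name: "embed-certs-160920", Memory: 2200, CPUs: 2,
			Driver: "docker", EmbedCerts: true, APIServerPort: 8443,
			KubernetesConfig: KubernetesConfig{
				KubernetesVersion: "v1.29.3",
				ClusterName:       "embed-certs-160920",
				ContainerRuntime:  "containerd",
				NetworkPlugin:     "cni",
				ServiceCIDR:       "10.96.0.0/12",
			},
		}
		fmt.Printf("%+v\n", cc) // same rendering style as the log line above
	}
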
	I0408 19:35:15.948278 1049784 out.go:177] * Starting "embed-certs-160920" primary control-plane node in "embed-certs-160920" cluster
	I0408 19:35:15.949972 1049784 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0408 19:35:15.952039 1049784 out.go:177] * Pulling base image v0.0.43-1712593525-18585 ...
	I0408 19:35:15.953791 1049784 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 19:35:15.953854 1049784 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0408 19:35:15.953865 1049784 cache.go:56] Caching tarball of preloaded images
	I0408 19:35:15.953876 1049784 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local docker daemon
	I0408 19:35:15.953961 1049784 preload.go:173] Found /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 19:35:15.953971 1049784 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0408 19:35:15.954243 1049784 profile.go:143] Saving config to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/config.json ...
	I0408 19:35:15.954271 1049784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/config.json: {Name:mk5b6dc50db225a4a44f9fa26a8d9811af0fa38a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:35:15.970312 1049784 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local docker daemon, skipping pull
	I0408 19:35:15.970338 1049784 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd exists in daemon, skipping load
	I0408 19:35:15.970356 1049784 cache.go:194] Successfully downloaded all kic artifacts
	I0408 19:35:15.970384 1049784 start.go:360] acquireMachinesLock for embed-certs-160920: {Name:mka9893f4bb17c9e3582863ded36d4d0d17f21f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:35:15.970956 1049784 start.go:364] duration metric: took 547.022µs to acquireMachinesLock for "embed-certs-160920"
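The machines lock acquired here is a file-based mutex with a 500ms retry delay and a 10m timeout (the {... Delay:500ms Timeout:10m0s ...} literal above). A minimal sketch of that pattern, assuming a simple O_EXCL lock file rather than minikube's actual implementation:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file until the timeout passes.
	func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/machines-embed-certs-160920.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; provisioning can proceed")
	}
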
	I0408 19:35:15.970992 1049784 start.go:93] Provisioning new machine with config: &{Name:embed-certs-160920 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-160920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0408 19:35:15.971077 1049784 start.go:125] createHost starting for "" (driver="docker")
	I0408 19:35:15.973152 1049784 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0408 19:35:15.973393 1049784 start.go:159] libmachine.API.Create for "embed-certs-160920" (driver="docker")
	I0408 19:35:15.973429 1049784 client.go:168] LocalClient.Create starting
	I0408 19:35:15.973494 1049784 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem
	I0408 19:35:15.973586 1049784 main.go:141] libmachine: Decoding PEM data...
	I0408 19:35:15.973607 1049784 main.go:141] libmachine: Parsing certificate...
	I0408 19:35:15.973701 1049784 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18585-838483/.minikube/certs/cert.pem
	I0408 19:35:15.973734 1049784 main.go:141] libmachine: Decoding PEM data...
	I0408 19:35:15.973747 1049784 main.go:141] libmachine: Parsing certificate...
	I0408 19:35:15.974239 1049784 cli_runner.go:164] Run: docker network inspect embed-certs-160920 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0408 19:35:15.987865 1049784 cli_runner.go:211] docker network inspect embed-certs-160920 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0408 19:35:15.987950 1049784 network_create.go:281] running [docker network inspect embed-certs-160920] to gather additional debugging logs...
	I0408 19:35:15.987968 1049784 cli_runner.go:164] Run: docker network inspect embed-certs-160920
	W0408 19:35:16.000649 1049784 cli_runner.go:211] docker network inspect embed-certs-160920 returned with exit code 1
	I0408 19:35:16.000684 1049784 network_create.go:284] error running [docker network inspect embed-certs-160920]: docker network inspect embed-certs-160920: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-160920 not found
	I0408 19:35:16.000698 1049784 network_create.go:286] output of [docker network inspect embed-certs-160920]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-160920 not found
	
	** /stderr **
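The --format template passed to docker network inspect above flattens each network into a small JSON object. A matching Go struct for decoding that output (field names mirror the template; this helper is illustrative, not minikube code):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type netInfo struct {
		Name         string   `json:"Name"`
		Driver       string   `json:"Driver"`
		Subnet       string   `json:"Subnet"`
		Gateway      string   `json:"Gateway"`
		MTU          int      `json:"MTU"`
		ContainerIPs []string `json:"ContainerIPs"`
	}

	func main() {
		raw := `{"Name":"bridge","Driver":"bridge","Subnet":"172.17.0.0/16","Gateway":"172.17.0.1","MTU":1500,"ContainerIPs":[]}`
		var n netInfo
		if err := json.Unmarshal([]byte(raw), &n); err != nil {
			fmt.Println("decode:", err)
			return
		}
		fmt.Printf("%s -> %s (gw %s)\n", n.Name, n.Subnet, n.Gateway)
	}
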
	I0408 19:35:16.000846 1049784 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0408 19:35:16.018120 1049784 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a63f63e60f29 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:71:a8:58:39} reservation:<nil>}
	I0408 19:35:16.018593 1049784 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b78628e149cf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:33:39:ed:9d} reservation:<nil>}
	I0408 19:35:16.019297 1049784 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-635750c08010 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:e3:10:6f:81} reservation:<nil>}
	I0408 19:35:16.019937 1049784 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b99d1e5a6f35 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:33:53:f6:32} reservation:<nil>}
	I0408 19:35:16.020507 1049784 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002581fa0}
	I0408 19:35:16.020543 1049784 network_create.go:124] attempt to create docker network embed-certs-160920 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0408 19:35:16.020624 1049784 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-160920 embed-certs-160920
	I0408 19:35:16.101036 1049784 network_create.go:108] docker network embed-certs-160920 192.168.85.0/24 created
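Note the probe order above: candidate subnets advance the third octet in steps of 9 (49, 58, 67, 76, 85) until one is not claimed by an existing bridge. A simplified sketch of that selection loop (the real code also inspects host interfaces and reservations):

	package main

	import "fmt"

	// firstFreeSubnet walks the candidate sequence seen in the log.
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 254; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				return subnet
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24
	}
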
	I0408 19:35:16.101070 1049784 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-160920" container
	I0408 19:35:16.101157 1049784 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0408 19:35:16.114053 1049784 cli_runner.go:164] Run: docker volume create embed-certs-160920 --label name.minikube.sigs.k8s.io=embed-certs-160920 --label created_by.minikube.sigs.k8s.io=true
	I0408 19:35:16.127935 1049784 oci.go:103] Successfully created a docker volume embed-certs-160920
	I0408 19:35:16.128028 1049784 cli_runner.go:164] Run: docker run --rm --name embed-certs-160920-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-160920 --entrypoint /usr/bin/test -v embed-certs-160920:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd -d /var/lib
	I0408 19:35:16.696080 1049784 oci.go:107] Successfully prepared a docker volume embed-certs-160920
	I0408 19:35:16.696125 1049784 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 19:35:16.696145 1049784 kic.go:194] Starting extracting preloaded images to volume ...
	I0408 19:35:16.696235 1049784 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-160920:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd -I lz4 -xf /preloaded.tar -C /extractDir
	I0408 19:35:17.203860 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:19.209052 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:21.515850 1049784 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-160920:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd -I lz4 -xf /preloaded.tar -C /extractDir: (4.819558145s)
	I0408 19:35:21.515881 1049784 kic.go:203] duration metric: took 4.819732804s to extract preloaded images to volume ...
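The 4.8s step above unpacks the preload by running a throwaway container whose entrypoint is tar, so the lz4 tarball lands directly in the named volume that later becomes the node's /var. The same pattern invoked from Go, with the host path and image digest abbreviated for readability:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // host tarball (path abbreviated)
			"-v", "embed-certs-160920:/extractDir", // named volume, later mounted at /var
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585", // digest suffix dropped here
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}
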
	W0408 19:35:21.516013 1049784 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0408 19:35:21.516145 1049784 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0408 19:35:21.569547 1049784 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-160920 --name embed-certs-160920 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-160920 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-160920 --network embed-certs-160920 --ip 192.168.85.2 --volume embed-certs-160920:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd
	I0408 19:35:21.868769 1049784 cli_runner.go:164] Run: docker container inspect embed-certs-160920 --format={{.State.Running}}
	I0408 19:35:21.891289 1049784 cli_runner.go:164] Run: docker container inspect embed-certs-160920 --format={{.State.Status}}
	I0408 19:35:21.911471 1049784 cli_runner.go:164] Run: docker exec embed-certs-160920 stat /var/lib/dpkg/alternatives/iptables
	I0408 19:35:21.997341 1049784 oci.go:144] the created container "embed-certs-160920" has a running status.
	I0408 19:35:21.997369 1049784 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18585-838483/.minikube/machines/embed-certs-160920/id_rsa...
	I0408 19:35:22.378266 1049784 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18585-838483/.minikube/machines/embed-certs-160920/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0408 19:35:22.409824 1049784 cli_runner.go:164] Run: docker container inspect embed-certs-160920 --format={{.State.Status}}
	I0408 19:35:22.434130 1049784 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0408 19:35:22.434165 1049784 kic_runner.go:114] Args: [docker exec --privileged embed-certs-160920 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0408 19:35:22.516678 1049784 cli_runner.go:164] Run: docker container inspect embed-certs-160920 --format={{.State.Status}}
	I0408 19:35:22.539615 1049784 machine.go:94] provisionDockerMachine start ...
	I0408 19:35:22.539796 1049784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160920
	I0408 19:35:22.566276 1049784 main.go:141] libmachine: Using SSH client type: native
	I0408 19:35:22.566582 1049784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33870 <nil> <nil>}
	I0408 19:35:22.566592 1049784 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 19:35:22.567346 1049784 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42466->127.0.0.1:33870: read: connection reset by peer
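The handshake failure here is benign: sshd inside the just-started container is not yet accepting connections, and the client retries until it is. A minimal sketch of such a readiness wait, assuming a plain TCP probe in place of a full SSH handshake:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForPort retries a TCP dial until the port answers or the deadline passes.
	func waitForPort(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("%s not reachable within %s", addr, timeout)
	}

	func main() {
		// 33870 is the host port Docker mapped to the container's 22/tcp above.
		fmt.Println(waitForPort("127.0.0.1:33870", time.Minute))
	}
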
	I0408 19:35:25.709467 1049784 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-160920
	
	I0408 19:35:25.709509 1049784 ubuntu.go:169] provisioning hostname "embed-certs-160920"
	I0408 19:35:25.709590 1049784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160920
	I0408 19:35:25.725860 1049784 main.go:141] libmachine: Using SSH client type: native
	I0408 19:35:25.726325 1049784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33870 <nil> <nil>}
	I0408 19:35:25.726349 1049784 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-160920 && echo "embed-certs-160920" | sudo tee /etc/hostname
	I0408 19:35:21.709399 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:24.203105 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:25.878813 1049784 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-160920
	
	I0408 19:35:25.878890 1049784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160920
	I0408 19:35:25.894918 1049784 main.go:141] libmachine: Using SSH client type: native
	I0408 19:35:25.895164 1049784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33870 <nil> <nil>}
	I0408 19:35:25.895186 1049784 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-160920' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-160920/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-160920' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 19:35:26.038252 1049784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 19:35:26.038342 1049784 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18585-838483/.minikube CaCertPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18585-838483/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18585-838483/.minikube}
	I0408 19:35:26.038398 1049784 ubuntu.go:177] setting up certificates
	I0408 19:35:26.038432 1049784 provision.go:84] configureAuth start
	I0408 19:35:26.038514 1049784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-160920
	I0408 19:35:26.058810 1049784 provision.go:143] copyHostCerts
	I0408 19:35:26.058876 1049784 exec_runner.go:144] found /home/jenkins/minikube-integration/18585-838483/.minikube/ca.pem, removing ...
	I0408 19:35:26.058885 1049784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18585-838483/.minikube/ca.pem
	I0408 19:35:26.058960 1049784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18585-838483/.minikube/ca.pem (1082 bytes)
	I0408 19:35:26.059054 1049784 exec_runner.go:144] found /home/jenkins/minikube-integration/18585-838483/.minikube/cert.pem, removing ...
	I0408 19:35:26.059059 1049784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18585-838483/.minikube/cert.pem
	I0408 19:35:26.059088 1049784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18585-838483/.minikube/cert.pem (1123 bytes)
	I0408 19:35:26.059147 1049784 exec_runner.go:144] found /home/jenkins/minikube-integration/18585-838483/.minikube/key.pem, removing ...
	I0408 19:35:26.059152 1049784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18585-838483/.minikube/key.pem
	I0408 19:35:26.059177 1049784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18585-838483/.minikube/key.pem (1675 bytes)
	I0408 19:35:26.059229 1049784 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18585-838483/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca-key.pem org=jenkins.embed-certs-160920 san=[127.0.0.1 192.168.85.2 embed-certs-160920 localhost minikube]
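The server certificate generated here carries the SAN set listed in the log (IPs 127.0.0.1 and 192.168.85.2; DNS names embed-certs-160920, localhost, minikube) and is signed by the profile's CA. A self-contained Go sketch of the same construction (illustrative; minikube has its own cert helpers):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// CA key pair and self-signed CA certificate.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-160920"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			DNSNames:     []string{"embed-certs-160920", "localhost", "minikube"},
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Printf("server cert: %d bytes DER\n", len(srvDER))
	}
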
	I0408 19:35:26.413507 1049784 provision.go:177] copyRemoteCerts
	I0408 19:35:26.413590 1049784 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 19:35:26.413674 1049784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160920
	I0408 19:35:26.429149 1049784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33870 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/embed-certs-160920/id_rsa Username:docker}
	I0408 19:35:26.527382 1049784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 19:35:26.553365 1049784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0408 19:35:26.578991 1049784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 19:35:26.605878 1049784 provision.go:87] duration metric: took 567.416871ms to configureAuth
	I0408 19:35:26.605914 1049784 ubuntu.go:193] setting minikube options for container-runtime
	I0408 19:35:26.606162 1049784 config.go:182] Loaded profile config "embed-certs-160920": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 19:35:26.606179 1049784 machine.go:97] duration metric: took 4.066545956s to provisionDockerMachine
	I0408 19:35:26.606187 1049784 client.go:171] duration metric: took 10.632749752s to LocalClient.Create
	I0408 19:35:26.606213 1049784 start.go:167] duration metric: took 10.632817418s to libmachine.API.Create "embed-certs-160920"
	I0408 19:35:26.606253 1049784 start.go:293] postStartSetup for "embed-certs-160920" (driver="docker")
	I0408 19:35:26.606263 1049784 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 19:35:26.606333 1049784 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 19:35:26.606384 1049784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160920
	I0408 19:35:26.626570 1049784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33870 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/embed-certs-160920/id_rsa Username:docker}
	I0408 19:35:26.727353 1049784 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 19:35:26.730612 1049784 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0408 19:35:26.730648 1049784 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0408 19:35:26.730659 1049784 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0408 19:35:26.730666 1049784 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0408 19:35:26.730681 1049784 filesync.go:126] Scanning /home/jenkins/minikube-integration/18585-838483/.minikube/addons for local assets ...
	I0408 19:35:26.730742 1049784 filesync.go:126] Scanning /home/jenkins/minikube-integration/18585-838483/.minikube/files for local assets ...
	I0408 19:35:26.730830 1049784 filesync.go:149] local asset: /home/jenkins/minikube-integration/18585-838483/.minikube/files/etc/ssl/certs/8439002.pem -> 8439002.pem in /etc/ssl/certs
	I0408 19:35:26.730940 1049784 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 19:35:26.739903 1049784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/files/etc/ssl/certs/8439002.pem --> /etc/ssl/certs/8439002.pem (1708 bytes)
	I0408 19:35:26.766153 1049784 start.go:296] duration metric: took 159.88587ms for postStartSetup
	I0408 19:35:26.766514 1049784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-160920
	I0408 19:35:26.781281 1049784 profile.go:143] Saving config to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/config.json ...
	I0408 19:35:26.781569 1049784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 19:35:26.781634 1049784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160920
	I0408 19:35:26.797470 1049784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33870 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/embed-certs-160920/id_rsa Username:docker}
	I0408 19:35:26.895303 1049784 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0408 19:35:26.899883 1049784 start.go:128] duration metric: took 10.92879092s to createHost
	I0408 19:35:26.899907 1049784 start.go:83] releasing machines lock for "embed-certs-160920", held for 10.928935342s
	I0408 19:35:26.899978 1049784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-160920
	I0408 19:35:26.917062 1049784 ssh_runner.go:195] Run: cat /version.json
	I0408 19:35:26.917135 1049784 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 19:35:26.917153 1049784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160920
	I0408 19:35:26.917216 1049784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160920
	I0408 19:35:26.933992 1049784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33870 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/embed-certs-160920/id_rsa Username:docker}
	I0408 19:35:26.944817 1049784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33870 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/embed-certs-160920/id_rsa Username:docker}
	I0408 19:35:27.034966 1049784 ssh_runner.go:195] Run: systemctl --version
	I0408 19:35:27.170444 1049784 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 19:35:27.174860 1049784 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0408 19:35:27.204112 1049784 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0408 19:35:27.204203 1049784 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 19:35:27.235641 1049784 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0408 19:35:27.235721 1049784 start.go:494] detecting cgroup driver to use...
	I0408 19:35:27.235763 1049784 detect.go:196] detected "cgroupfs" cgroup driver on host os
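The detected driver matches the CgroupDriver:cgroupfs field in the docker info dump earlier in this log. One simple way to query it directly, shown as a sketch:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("docker info:", err)
			return
		}
		fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // "cgroupfs" on this host
	}
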
	I0408 19:35:27.235826 1049784 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 19:35:27.249167 1049784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 19:35:27.261111 1049784 docker.go:217] disabling cri-docker service (if available) ...
	I0408 19:35:27.261177 1049784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 19:35:27.282406 1049784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 19:35:27.297817 1049784 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 19:35:27.390235 1049784 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 19:35:27.478393 1049784 docker.go:233] disabling docker service ...
	I0408 19:35:27.478520 1049784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 19:35:27.503530 1049784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 19:35:27.515986 1049784 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 19:35:27.620358 1049784 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 19:35:27.729863 1049784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 19:35:27.742992 1049784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 19:35:27.759888 1049784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0408 19:35:27.770842 1049784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 19:35:27.781961 1049784 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 19:35:27.782111 1049784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 19:35:27.793788 1049784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 19:35:27.803894 1049784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 19:35:27.813928 1049784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 19:35:27.826681 1049784 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 19:35:27.838352 1049784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 19:35:27.849121 1049784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 19:35:27.859585 1049784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
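The sed run above edits /etc/containerd/config.toml in place; the SystemdCgroup flip is the one edit that has to agree with the "cgroupfs" driver detected at start.go:494. A hedged Go equivalent of that single edit (path and file mode assumed):

	// containerd_cgroup.go — sketch: flip SystemdCgroup in containerd's config
	// with Go's regexp package instead of the sed command logged above.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Match the whole assignment line, keep its indentation (group 1).
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}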
	I0408 19:35:27.870256 1049784 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 19:35:27.879883 1049784 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 19:35:27.888402 1049784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:35:27.976643 1049784 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 19:35:28.134620 1049784 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0408 19:35:28.134735 1049784 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0408 19:35:28.139532 1049784 start.go:562] Will wait 60s for crictl version
	I0408 19:35:28.139648 1049784 ssh_runner.go:195] Run: which crictl
	I0408 19:35:28.143685 1049784 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 19:35:28.181487 1049784 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0408 19:35:28.181603 1049784 ssh_runner.go:195] Run: containerd --version
	I0408 19:35:28.209524 1049784 ssh_runner.go:195] Run: containerd --version
	I0408 19:35:28.236448 1049784 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.6.28 ...
	I0408 19:35:28.238360 1049784 cli_runner.go:164] Run: docker network inspect embed-certs-160920 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0408 19:35:28.251614 1049784 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0408 19:35:28.255334 1049784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:35:28.266504 1049784 kubeadm.go:877] updating cluster {Name:embed-certs-160920 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-160920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 19:35:28.266635 1049784 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 19:35:28.266702 1049784 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:35:28.304649 1049784 containerd.go:627] all images are preloaded for containerd runtime.
	I0408 19:35:28.304674 1049784 containerd.go:534] Images already preloaded, skipping extraction
	I0408 19:35:28.304743 1049784 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:35:28.342688 1049784 containerd.go:627] all images are preloaded for containerd runtime.
	I0408 19:35:28.342710 1049784 cache_images.go:84] Images are preloaded, skipping loading
	I0408 19:35:28.342718 1049784 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.29.3 containerd true true} ...
	I0408 19:35:28.342815 1049784 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-160920 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-160920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 19:35:28.342881 1049784 ssh_runner.go:195] Run: sudo crictl info
	I0408 19:35:28.388604 1049784 cni.go:84] Creating CNI manager for ""
	I0408 19:35:28.388624 1049784 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0408 19:35:28.388634 1049784 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 19:35:28.388655 1049784 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-160920 NodeName:embed-certs-160920 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 19:35:28.388783 1049784 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-160920"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
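The kubeadm config written above is four YAML documents in a single file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that splits such a file and prints each document's apiVersion and kind, using gopkg.in/yaml.v3; the /var/tmp/minikube/kubeadm.yaml path is the one the log copies into place later:

	// kubeadm_docs.go — sketch: enumerate the documents in a multi-doc
	// kubeadm config file and report each one's apiVersion/kind.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f) // decodes successive "---" documents
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}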
	I0408 19:35:28.388848 1049784 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 19:35:28.399970 1049784 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 19:35:28.400047 1049784 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 19:35:28.409313 1049784 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0408 19:35:28.428611 1049784 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 19:35:28.447377 1049784 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0408 19:35:28.465435 1049784 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0408 19:35:28.469276 1049784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
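The /etc/hosts edit above is idempotent: it strips any prior line for the name and appends a fresh mapping. The same pattern in Go (writing /etc/hosts directly, which needs root; minikube does it through the shell pipeline shown instead):

	// hosts_entry.go — sketch of an idempotent /etc/hosts update for
	// control-plane.minikube.internal (IP taken from the log above).
	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		const entry = "192.168.85.2\t" + host
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0]
		for _, line := range lines {
			if !strings.HasSuffix(line, "\t"+host) { // drop any stale mapping
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		out := strings.Join(kept, "\n") + "\n"
		if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
			panic(err)
		}
	}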
	I0408 19:35:28.479825 1049784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:35:28.565580 1049784 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:35:28.583139 1049784 certs.go:68] Setting up /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920 for IP: 192.168.85.2
	I0408 19:35:28.583199 1049784 certs.go:194] generating shared ca certs ...
	I0408 19:35:28.583229 1049784 certs.go:226] acquiring lock for ca certs: {Name:mkee58842a3256e0a530a93e9e38afd9941f0741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:35:28.583389 1049784 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18585-838483/.minikube/ca.key
	I0408 19:35:28.583469 1049784 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.key
	I0408 19:35:28.583499 1049784 certs.go:256] generating profile certs ...
	I0408 19:35:28.583571 1049784 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/client.key
	I0408 19:35:28.583605 1049784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/client.crt with IP's: []
	I0408 19:35:28.971375 1049784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/client.crt ...
	I0408 19:35:28.971407 1049784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/client.crt: {Name:mk1683174e57f47893fc8495fc34a973a03a4559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:35:28.971599 1049784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/client.key ...
	I0408 19:35:28.971612 1049784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/client.key: {Name:mkeeeb492afa40e2e264fd1e24600a5a7feffc0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:35:28.972201 1049784 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/apiserver.key.13c8c765
	I0408 19:35:28.972225 1049784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/apiserver.crt.13c8c765 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0408 19:35:29.733192 1049784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/apiserver.crt.13c8c765 ...
	I0408 19:35:29.733227 1049784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/apiserver.crt.13c8c765: {Name:mk144c1c9a88ccdb5ac7dcdce9d0ca5d5fa24dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:35:29.734092 1049784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/apiserver.key.13c8c765 ...
	I0408 19:35:29.734122 1049784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/apiserver.key.13c8c765: {Name:mk6caa8208053e1ba62fc2a6d764ba0c7dd89d62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:35:29.734224 1049784 certs.go:381] copying /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/apiserver.crt.13c8c765 -> /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/apiserver.crt
	I0408 19:35:29.734309 1049784 certs.go:385] copying /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/apiserver.key.13c8c765 -> /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/apiserver.key
	I0408 19:35:29.734369 1049784 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/proxy-client.key
	I0408 19:35:29.734387 1049784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/proxy-client.crt with IP's: []
	I0408 19:35:30.191411 1049784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/proxy-client.crt ...
	I0408 19:35:30.191446 1049784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/proxy-client.crt: {Name:mk5e471940ccdfcdfcb3f4e0896ea646237ff9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:35:30.191675 1049784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/proxy-client.key ...
	I0408 19:35:30.191692 1049784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/proxy-client.key: {Name:mk20a793aac02268203d83b9429b0b87aebd86c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
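The client, apiserver, and proxy-client profile certs generated above are ordinary CA-signed X.509 certificates. A compact standard-library sketch of issuing one such client cert; the paths, subject, and validity are placeholders, and a PKCS#1 RSA CA key is assumed, so this is an illustration rather than minikube's exact parameters:

	// profile_cert.go — sketch: issue a client certificate signed by an
	// existing CA, mirroring the "generating signed profile cert" steps.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		// Load the CA pair (placeholder paths; assumes a PKCS#1 RSA key).
		caCertPEM, err := os.ReadFile("ca.crt")
		if err != nil {
			panic(err)
		}
		caKeyPEM, err := os.ReadFile("ca.key")
		if err != nil {
			panic(err)
		}
		certBlock, _ := pem.Decode(caCertPEM)
		caCert, err := x509.ParseCertificate(certBlock.Bytes)
		if err != nil {
			panic(err)
		}
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			panic(err)
		}
		// Fresh key for the client certificate.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}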
	I0408 19:35:30.192548 1049784 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/843900.pem (1338 bytes)
	W0408 19:35:30.192598 1049784 certs.go:480] ignoring /home/jenkins/minikube-integration/18585-838483/.minikube/certs/843900_empty.pem, impossibly tiny 0 bytes
	I0408 19:35:30.192619 1049784 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 19:35:30.192651 1049784 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/ca.pem (1082 bytes)
	I0408 19:35:30.192679 1049784 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/cert.pem (1123 bytes)
	I0408 19:35:30.192703 1049784 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/certs/key.pem (1675 bytes)
	I0408 19:35:30.192754 1049784 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-838483/.minikube/files/etc/ssl/certs/8439002.pem (1708 bytes)
	I0408 19:35:30.193500 1049784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 19:35:30.231043 1049784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 19:35:30.257187 1049784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 19:35:30.292413 1049784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 19:35:30.319185 1049784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0408 19:35:30.343981 1049784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 19:35:30.368170 1049784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 19:35:30.393549 1049784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/embed-certs-160920/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 19:35:30.424810 1049784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/certs/843900.pem --> /usr/share/ca-certificates/843900.pem (1338 bytes)
	I0408 19:35:30.451252 1049784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/files/etc/ssl/certs/8439002.pem --> /usr/share/ca-certificates/8439002.pem (1708 bytes)
	I0408 19:35:30.476461 1049784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-838483/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 19:35:30.501538 1049784 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 19:35:30.520180 1049784 ssh_runner.go:195] Run: openssl version
	I0408 19:35:30.527274 1049784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/843900.pem && ln -fs /usr/share/ca-certificates/843900.pem /etc/ssl/certs/843900.pem"
	I0408 19:35:30.537625 1049784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/843900.pem
	I0408 19:35:30.541397 1049784 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 18:50 /usr/share/ca-certificates/843900.pem
	I0408 19:35:30.541466 1049784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/843900.pem
	I0408 19:35:30.548821 1049784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/843900.pem /etc/ssl/certs/51391683.0"
	I0408 19:35:30.558724 1049784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8439002.pem && ln -fs /usr/share/ca-certificates/8439002.pem /etc/ssl/certs/8439002.pem"
	I0408 19:35:30.568872 1049784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8439002.pem
	I0408 19:35:30.572714 1049784 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 18:50 /usr/share/ca-certificates/8439002.pem
	I0408 19:35:30.572780 1049784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8439002.pem
	I0408 19:35:30.579775 1049784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8439002.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 19:35:30.589513 1049784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 19:35:30.599247 1049784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:35:30.602769 1049784 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:35:30.602843 1049784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:35:30.610158 1049784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
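Each CA certificate installed above also gets an OpenSSL-style <subject-hash>.0 symlink in /etc/ssl/certs so TLS clients can locate it. A Go sketch of that step that shells out to openssl for the hash (PEM path taken from the log; error handling kept minimal):

	// ca_symlink.go — sketch: compute the subject hash with openssl, then
	// point /etc/ssl/certs/<hash>.0 at the installed PEM.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // replace any stale link
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", pemPath)
	}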
	I0408 19:35:30.620372 1049784 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 19:35:30.623829 1049784 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 19:35:30.623918 1049784 kubeadm.go:391] StartCluster: {Name:embed-certs-160920 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-160920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:35:30.624012 1049784 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0408 19:35:30.624078 1049784 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:35:30.663622 1049784 cri.go:89] found id: ""
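StartCluster first checks for leftover kube-system containers, and the empty found id: "" confirms a clean slate. The same crictl query from Go (plain sudo here, where the log wraps it in sudo -s eval):

	// list_kube_system.go — sketch: list container IDs labelled with the
	// kube-system namespace, as in the crictl invocation above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out)) // one ID per line when non-empty
		fmt.Printf("found %d containers: %v\n", len(ids), ids)
	}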
	I0408 19:35:30.663735 1049784 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 19:35:30.672655 1049784 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 19:35:30.681977 1049784 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0408 19:35:30.682077 1049784 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:35:30.691854 1049784 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:35:30.691876 1049784 kubeadm.go:156] found existing configuration files:
	
	I0408 19:35:30.691956 1049784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:35:30.701972 1049784 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:35:30.702072 1049784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:35:30.710749 1049784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:35:30.720142 1049784 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:35:30.720243 1049784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:35:30.729232 1049784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:35:30.739691 1049784 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:35:30.739778 1049784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:35:30.748391 1049784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:35:30.757245 1049784 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:35:30.757363 1049784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 19:35:30.765818 1049784 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0408 19:35:26.210835 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:28.703832 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:30.704470 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:30.816496 1049784 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 19:35:30.816880 1049784 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 19:35:30.879802 1049784 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0408 19:35:30.879902 1049784 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1056-aws
	I0408 19:35:30.879959 1049784 kubeadm.go:309] OS: Linux
	I0408 19:35:30.880033 1049784 kubeadm.go:309] CGROUPS_CPU: enabled
	I0408 19:35:30.880107 1049784 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0408 19:35:30.880171 1049784 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0408 19:35:30.880247 1049784 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0408 19:35:30.880317 1049784 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0408 19:35:30.880388 1049784 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0408 19:35:30.880454 1049784 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0408 19:35:30.880531 1049784 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0408 19:35:30.880607 1049784 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0408 19:35:30.958625 1049784 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 19:35:30.958807 1049784 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 19:35:30.958946 1049784 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 19:35:31.240131 1049784 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 19:35:31.242359 1049784 out.go:204]   - Generating certificates and keys ...
	I0408 19:35:31.242474 1049784 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 19:35:31.242665 1049784 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 19:35:32.164380 1049784 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 19:35:32.402398 1049784 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0408 19:35:33.178885 1049784 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0408 19:35:33.589876 1049784 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0408 19:35:33.889309 1049784 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0408 19:35:33.889481 1049784 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [embed-certs-160920 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0408 19:35:34.615377 1049784 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0408 19:35:34.615728 1049784 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-160920 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0408 19:35:34.912721 1049784 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 19:35:35.157541 1049784 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 19:35:35.434079 1049784 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0408 19:35:35.434720 1049784 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 19:35:33.210611 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:35.703183 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:35.976688 1049784 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 19:35:36.625534 1049784 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 19:35:37.960165 1049784 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 19:35:38.384223 1049784 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 19:35:39.103514 1049784 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 19:35:39.103614 1049784 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 19:35:39.107684 1049784 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 19:35:39.110378 1049784 out.go:204]   - Booting up control plane ...
	I0408 19:35:39.110491 1049784 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 19:35:39.110569 1049784 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 19:35:39.110633 1049784 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 19:35:39.128701 1049784 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 19:35:39.130221 1049784 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 19:35:39.130775 1049784 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 19:35:39.266491 1049784 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 19:35:37.703715 1040091 pod_ready.go:102] pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace has status "Ready":"False"
	I0408 19:35:38.204719 1040091 pod_ready.go:81] duration metric: took 4m0.008393212s for pod "metrics-server-9975d5f86-fvr87" in "kube-system" namespace to be "Ready" ...
	E0408 19:35:38.204742 1040091 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0408 19:35:38.204751 1040091 pod_ready.go:38] duration metric: took 5m28.796338351s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 19:35:38.204765 1040091 api_server.go:52] waiting for apiserver process to appear ...
	I0408 19:35:38.204792 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:35:38.204853 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:35:38.260969 1040091 cri.go:89] found id: "9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6"
	I0408 19:35:38.261043 1040091 cri.go:89] found id: "b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061"
	I0408 19:35:38.261071 1040091 cri.go:89] found id: ""
	I0408 19:35:38.261091 1040091 logs.go:276] 2 containers: [9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6 b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061]
	I0408 19:35:38.261172 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.265489 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.269691 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0408 19:35:38.269766 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:35:38.338094 1040091 cri.go:89] found id: "7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3"
	I0408 19:35:38.338113 1040091 cri.go:89] found id: "db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a"
	I0408 19:35:38.338118 1040091 cri.go:89] found id: ""
	I0408 19:35:38.338125 1040091 logs.go:276] 2 containers: [7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3 db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a]
	I0408 19:35:38.338179 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.342136 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.346077 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0408 19:35:38.346192 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:35:38.400613 1040091 cri.go:89] found id: "7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2"
	I0408 19:35:38.400688 1040091 cri.go:89] found id: "3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892"
	I0408 19:35:38.400695 1040091 cri.go:89] found id: ""
	I0408 19:35:38.400702 1040091 logs.go:276] 2 containers: [7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2 3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892]
	I0408 19:35:38.400789 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.407694 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.411547 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:35:38.411673 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:35:38.467628 1040091 cri.go:89] found id: "edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90"
	I0408 19:35:38.467696 1040091 cri.go:89] found id: "a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f"
	I0408 19:35:38.467715 1040091 cri.go:89] found id: ""
	I0408 19:35:38.467737 1040091 logs.go:276] 2 containers: [edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90 a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f]
	I0408 19:35:38.467821 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.488655 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.496507 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:35:38.496686 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:35:38.554128 1040091 cri.go:89] found id: "69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf"
	I0408 19:35:38.554201 1040091 cri.go:89] found id: "63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070"
	I0408 19:35:38.554220 1040091 cri.go:89] found id: ""
	I0408 19:35:38.554240 1040091 logs.go:276] 2 containers: [69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf 63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070]
	I0408 19:35:38.554324 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.558767 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.562377 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:35:38.562508 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:35:38.624832 1040091 cri.go:89] found id: "f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46"
	I0408 19:35:38.624905 1040091 cri.go:89] found id: "172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58"
	I0408 19:35:38.624924 1040091 cri.go:89] found id: ""
	I0408 19:35:38.624942 1040091 logs.go:276] 2 containers: [f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46 172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58]
	I0408 19:35:38.625026 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.629409 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.633137 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0408 19:35:38.633271 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:35:38.681443 1040091 cri.go:89] found id: "19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93"
	I0408 19:35:38.681514 1040091 cri.go:89] found id: "544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356"
	I0408 19:35:38.681530 1040091 cri.go:89] found id: ""
	I0408 19:35:38.681564 1040091 logs.go:276] 2 containers: [19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93 544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356]
	I0408 19:35:38.681659 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.685857 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.689903 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:35:38.690078 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:35:38.742785 1040091 cri.go:89] found id: "804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940"
	I0408 19:35:38.742857 1040091 cri.go:89] found id: ""
	I0408 19:35:38.742895 1040091 logs.go:276] 1 containers: [804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940]
	I0408 19:35:38.742984 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.749020 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0408 19:35:38.749137 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 19:35:38.804860 1040091 cri.go:89] found id: "15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19"
	I0408 19:35:38.804927 1040091 cri.go:89] found id: "0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63"
	I0408 19:35:38.804946 1040091 cri.go:89] found id: ""
	I0408 19:35:38.804968 1040091 logs.go:276] 2 containers: [15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19 0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63]
	I0408 19:35:38.805052 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.809143 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:38.812859 1040091 logs.go:123] Gathering logs for dmesg ...
	I0408 19:35:38.812937 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:35:38.837235 1040091 logs.go:123] Gathering logs for coredns [7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2] ...
	I0408 19:35:38.837316 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2"
	I0408 19:35:38.888003 1040091 logs.go:123] Gathering logs for kindnet [19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93] ...
	I0408 19:35:38.888080 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93"
	I0408 19:35:38.949725 1040091 logs.go:123] Gathering logs for kindnet [544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356] ...
	I0408 19:35:38.949806 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356"
	I0408 19:35:39.046663 1040091 logs.go:123] Gathering logs for kubernetes-dashboard [804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940] ...
	I0408 19:35:39.046743 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940"
	I0408 19:35:39.102564 1040091 logs.go:123] Gathering logs for kube-controller-manager [f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46] ...
	I0408 19:35:39.102644 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46"
	I0408 19:35:39.204565 1040091 logs.go:123] Gathering logs for storage-provisioner [15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19] ...
	I0408 19:35:39.204644 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19"
	I0408 19:35:39.293176 1040091 logs.go:123] Gathering logs for kube-scheduler [a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f] ...
	I0408 19:35:39.293252 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f"
	I0408 19:35:39.354667 1040091 logs.go:123] Gathering logs for kube-proxy [69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf] ...
	I0408 19:35:39.354741 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf"
	I0408 19:35:39.407570 1040091 logs.go:123] Gathering logs for kube-proxy [63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070] ...
	I0408 19:35:39.407645 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070"
	I0408 19:35:39.462603 1040091 logs.go:123] Gathering logs for kubelet ...
	I0408 19:35:39.462679 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 19:35:39.526129 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.354687     663 reflector.go:138] object-"default"/"default-token-gzsv4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gzsv4" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.526453 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.354764     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.526695 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.354859     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-zmg69": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-zmg69" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.526948 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.356924     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-vcs78": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-vcs78" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.527181 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.356979     663 reflector.go:138] object-"kube-system"/"coredns-token-w52sl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-w52sl" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.527440 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.357025     663 reflector.go:138] object-"kube-system"/"metrics-server-token-zxgqt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-zxgqt" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.527671 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.357068     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.527912 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.374468     663 reflector.go:138] object-"kube-system"/"kindnet-token-h6csz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-h6csz" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:39.536148 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:11 old-k8s-version-540675 kubelet[663]: E0408 19:30:11.776102     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:39.537747 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:12 old-k8s-version-540675 kubelet[663]: E0408 19:30:12.254992     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.540638 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:23 old-k8s-version-540675 kubelet[663]: E0408 19:30:23.971489     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:39.542852 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:34 old-k8s-version-540675 kubelet[663]: E0408 19:30:34.355720     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.543072 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:34 old-k8s-version-540675 kubelet[663]: E0408 19:30:34.977351     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.543423 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:35 old-k8s-version-540675 kubelet[663]: E0408 19:30:35.361567     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.543774 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:38 old-k8s-version-540675 kubelet[663]: E0408 19:30:38.654910     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.546650 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:49 old-k8s-version-540675 kubelet[663]: E0408 19:30:49.978868     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:39.547658 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:52 old-k8s-version-540675 kubelet[663]: E0408 19:30:52.422181     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.548051 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:58 old-k8s-version-540675 kubelet[663]: E0408 19:30:58.654477     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.548261 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:02 old-k8s-version-540675 kubelet[663]: E0408 19:31:02.963701     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.548608 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:09 old-k8s-version-540675 kubelet[663]: E0408 19:31:09.963237     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.548815 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:17 old-k8s-version-540675 kubelet[663]: E0408 19:31:17.964344     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.549436 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:21 old-k8s-version-540675 kubelet[663]: E0408 19:31:21.505624     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.549823 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:28 old-k8s-version-540675 kubelet[663]: E0408 19:31:28.654754     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.550041 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:29 old-k8s-version-540675 kubelet[663]: E0408 19:31:29.963597     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.550389 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:38 old-k8s-version-540675 kubelet[663]: E0408 19:31:38.963843     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.552851 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:40 old-k8s-version-540675 kubelet[663]: E0408 19:31:40.981110     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:39.553204 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:49 old-k8s-version-540675 kubelet[663]: E0408 19:31:49.963312     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.553410 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:53 old-k8s-version-540675 kubelet[663]: E0408 19:31:53.963765     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.554027 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:02 old-k8s-version-540675 kubelet[663]: E0408 19:32:02.592991     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.554234 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:07 old-k8s-version-540675 kubelet[663]: E0408 19:32:07.963579     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.554610 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:08 old-k8s-version-540675 kubelet[663]: E0408 19:32:08.654229     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.554822 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:18 old-k8s-version-540675 kubelet[663]: E0408 19:32:18.966254     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.555241 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:19 old-k8s-version-540675 kubelet[663]: E0408 19:32:19.963192     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.555460 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:29 old-k8s-version-540675 kubelet[663]: E0408 19:32:29.963578     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.555821 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:32 old-k8s-version-540675 kubelet[663]: E0408 19:32:32.963790     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.556031 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:42 old-k8s-version-540675 kubelet[663]: E0408 19:32:42.963650     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.556383 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:43 old-k8s-version-540675 kubelet[663]: E0408 19:32:43.963376     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.556593 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:53 old-k8s-version-540675 kubelet[663]: E0408 19:32:53.963583     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.556942 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:58 old-k8s-version-540675 kubelet[663]: E0408 19:32:58.964157     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.559402 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:04 old-k8s-version-540675 kubelet[663]: E0408 19:33:04.972212     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:39.559775 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:09 old-k8s-version-540675 kubelet[663]: E0408 19:33:09.963440     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.559984 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:17 old-k8s-version-540675 kubelet[663]: E0408 19:33:17.963569     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.560333 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:21 old-k8s-version-540675 kubelet[663]: E0408 19:33:21.963178     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.560555 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:29 old-k8s-version-540675 kubelet[663]: E0408 19:33:29.963894     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.561176 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:36 old-k8s-version-540675 kubelet[663]: E0408 19:33:36.797444     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.561527 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:38 old-k8s-version-540675 kubelet[663]: E0408 19:33:38.654611     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.561739 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:42 old-k8s-version-540675 kubelet[663]: E0408 19:33:42.965353     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.562098 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:52 old-k8s-version-540675 kubelet[663]: E0408 19:33:52.963651     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.562305 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:57 old-k8s-version-540675 kubelet[663]: E0408 19:33:57.963546     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.562652 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:05 old-k8s-version-540675 kubelet[663]: E0408 19:34:05.963339     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.562857 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:10 old-k8s-version-540675 kubelet[663]: E0408 19:34:10.966560     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.563206 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:16 old-k8s-version-540675 kubelet[663]: E0408 19:34:16.963733     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.563414 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:21 old-k8s-version-540675 kubelet[663]: E0408 19:34:21.963613     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.563834 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:30 old-k8s-version-540675 kubelet[663]: E0408 19:34:30.964250     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.564076 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:36 old-k8s-version-540675 kubelet[663]: E0408 19:34:36.963651     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.564426 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:44 old-k8s-version-540675 kubelet[663]: E0408 19:34:44.963713     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.564668 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:51 old-k8s-version-540675 kubelet[663]: E0408 19:34:51.963845     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.565017 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:55 old-k8s-version-540675 kubelet[663]: E0408 19:34:55.963188     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.565237 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:02 old-k8s-version-540675 kubelet[663]: E0408 19:35:02.969106     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.565591 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:10 old-k8s-version-540675 kubelet[663]: E0408 19:35:10.966162     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.565803 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:15 old-k8s-version-540675 kubelet[663]: E0408 19:35:15.963675     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.566184 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:23 old-k8s-version-540675 kubelet[663]: E0408 19:35:23.963177     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:39.566391 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:29 old-k8s-version-540675 kubelet[663]: E0408 19:35:29.963846     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:39.566741 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:38 old-k8s-version-540675 kubelet[663]: E0408 19:35:38.967622     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
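The scan above keeps flagging the same two pods. metrics-server-9975d5f86-fvr87 fails to pull fake.domain/registry.k8s.io/echoserver:1.4, a hostname that can never resolve, so it alternates between ErrImagePull and ImagePullBackOff; dashboard-metrics-scraper-8d5bb5db8-zvfx4 sits in CrashLoopBackOff with the standard doubling back-off visible in the messages (10s, 20s, 40s, 1m20s, 2m40s). A minimal sketch of confirming both states by hand, assuming kubectl is pointed at this cluster (the pod names are copied from the log and would differ on a fresh run):

	# image-pull failure: the events section repeats the failed Head request to fake.domain
	kubectl -n kube-system describe pod metrics-server-9975d5f86-fvr87

	# crash loop: the scraper container's restart count climbs with each back-off cycle
	kubectl -n kubernetes-dashboard get pod dashboard-metrics-scraper-8d5bb5db8-zvfx4 \
	  -o jsonpath='{.status.containerStatuses[0].restartCount}'
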
	I0408 19:35:39.566763 1040091 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:35:39.566787 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 19:35:39.775939 1040091 logs.go:123] Gathering logs for kube-apiserver [b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061] ...
	I0408 19:35:39.775971 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061"
	I0408 19:35:39.837916 1040091 logs.go:123] Gathering logs for etcd [db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a] ...
	I0408 19:35:39.837960 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a"
	I0408 19:35:39.887500 1040091 logs.go:123] Gathering logs for kube-scheduler [edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90] ...
	I0408 19:35:39.887527 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90"
	I0408 19:35:39.934653 1040091 logs.go:123] Gathering logs for storage-provisioner [0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63] ...
	I0408 19:35:39.934681 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63"
	I0408 19:35:39.981694 1040091 logs.go:123] Gathering logs for container status ...
	I0408 19:35:39.981722 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:35:40.058506 1040091 logs.go:123] Gathering logs for kube-apiserver [9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6] ...
	I0408 19:35:40.058535 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6"
	I0408 19:35:40.152070 1040091 logs.go:123] Gathering logs for etcd [7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3] ...
	I0408 19:35:40.152105 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3"
	I0408 19:35:40.226742 1040091 logs.go:123] Gathering logs for coredns [3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892] ...
	I0408 19:35:40.226821 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892"
	I0408 19:35:40.268329 1040091 logs.go:123] Gathering logs for kube-controller-manager [172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58] ...
	I0408 19:35:40.268355 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58"
	I0408 19:35:40.352663 1040091 logs.go:123] Gathering logs for containerd ...
	I0408 19:35:40.352701 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0408 19:35:40.415919 1040091 out.go:304] Setting ErrFile to fd 2...
	I0408 19:35:40.415953 1040091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 19:35:40.416027 1040091 out.go:239] X Problems detected in kubelet:
	W0408 19:35:40.416039 1040091 out.go:239]   Apr 08 19:35:10 old-k8s-version-540675 kubelet[663]: E0408 19:35:10.966162     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:40.416048 1040091 out.go:239]   Apr 08 19:35:15 old-k8s-version-540675 kubelet[663]: E0408 19:35:15.963675     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:40.416064 1040091 out.go:239]   Apr 08 19:35:23 old-k8s-version-540675 kubelet[663]: E0408 19:35:23.963177     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:40.416090 1040091 out.go:239]   Apr 08 19:35:29 old-k8s-version-540675 kubelet[663]: E0408 19:35:29.963846     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:40.416105 1040091 out.go:239]   Apr 08 19:35:38 old-k8s-version-540675 kubelet[663]: E0408 19:35:38.967622     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	I0408 19:35:40.416123 1040091 out.go:304] Setting ErrFile to fd 2...
	I0408 19:35:40.416136 1040091 out.go:338] TERM=,COLORTERM=, which probably does not support color
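Each "Gathering logs for ..." step above follows the same pattern: resolve container IDs with crictl ps, then tail each container with crictl logs. A sketch of reproducing one pass by hand over SSH, assuming the old-k8s-version-540675 profile is still running (the --tail value mirrors what the harness uses):

	# list kube-apiserver containers (running or exited), then tail each one
	minikube ssh -p old-k8s-version-540675 -- \
	  'for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
	     sudo crictl logs --tail 400 "$id"
	   done'
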
	I0408 19:35:47.266502 1049784 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.002364 seconds
	I0408 19:35:47.297601 1049784 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 19:35:47.313796 1049784 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 19:35:47.841189 1049784 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 19:35:47.841385 1049784 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-160920 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 19:35:48.354722 1049784 kubeadm.go:309] [bootstrap-token] Using token: dejsc9.9fj79e4mdlspjibg
	I0408 19:35:48.357148 1049784 out.go:204]   - Configuring RBAC rules ...
	I0408 19:35:48.357305 1049784 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 19:35:48.363402 1049784 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 19:35:48.371440 1049784 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 19:35:48.375829 1049784 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 19:35:48.385482 1049784 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 19:35:48.389915 1049784 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 19:35:48.408014 1049784 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 19:35:48.674075 1049784 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 19:35:48.771543 1049784 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 19:35:48.773365 1049784 kubeadm.go:309] 
	I0408 19:35:48.773437 1049784 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 19:35:48.773443 1049784 kubeadm.go:309] 
	I0408 19:35:48.773517 1049784 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 19:35:48.773522 1049784 kubeadm.go:309] 
	I0408 19:35:48.773547 1049784 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 19:35:48.774028 1049784 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 19:35:48.774093 1049784 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 19:35:48.774099 1049784 kubeadm.go:309] 
	I0408 19:35:48.774151 1049784 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 19:35:48.774156 1049784 kubeadm.go:309] 
	I0408 19:35:48.774202 1049784 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 19:35:48.774207 1049784 kubeadm.go:309] 
	I0408 19:35:48.774257 1049784 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 19:35:48.774329 1049784 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 19:35:48.774398 1049784 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 19:35:48.774402 1049784 kubeadm.go:309] 
	I0408 19:35:48.774717 1049784 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 19:35:48.774798 1049784 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 19:35:48.774803 1049784 kubeadm.go:309] 
	I0408 19:35:48.775172 1049784 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token dejsc9.9fj79e4mdlspjibg \
	I0408 19:35:48.775387 1049784 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:40732441685d52f358537af2255d867bbdb5cf15cf08de16fca49474be9f966b \
	I0408 19:35:48.775578 1049784 kubeadm.go:309] 	--control-plane 
	I0408 19:35:48.775594 1049784 kubeadm.go:309] 
	I0408 19:35:48.775897 1049784 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 19:35:48.775907 1049784 kubeadm.go:309] 
	I0408 19:35:48.776173 1049784 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token dejsc9.9fj79e4mdlspjibg \
	I0408 19:35:48.776439 1049784 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:40732441685d52f358537af2255d867bbdb5cf15cf08de16fca49474be9f966b 
	I0408 19:35:48.781212 1049784 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1056-aws\n", err: exit status 1
	I0408 19:35:48.781323 1049784 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
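kubeadm finishes with two warnings the harness tolerates: the SystemVerification probe cannot load the "configs" kernel module on this 5.15.0-1056-aws kernel, and the kubelet systemd unit is not enabled (minikube manages the kubelet's lifecycle on the node itself, and the run proceeds regardless). On a hand-managed node one would clear the second warning exactly as the message suggests; a sketch, assuming a systemd host:

	# enable kubelet at boot and start it now, as the kubeadm warning recommends
	sudo systemctl enable --now kubelet.service
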
	I0408 19:35:48.781339 1049784 cni.go:84] Creating CNI manager for ""
	I0408 19:35:48.781346 1049784 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0408 19:35:48.784696 1049784 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0408 19:35:48.786589 1049784 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0408 19:35:48.795047 1049784 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0408 19:35:48.795113 1049784 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0408 19:35:48.826245 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
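The CNI step above is fully reconstructable from the log: the "docker" driver plus "containerd" runtime selects kindnet, a 2438-byte manifest is copied to /var/tmp/minikube/cni.yaml, and it is applied with the version-pinned kubectl against the node-local kubeconfig. Run by hand on the node, the apply is the same command the harness issued:

	sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -f /var/tmp/minikube/cni.yaml
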
	I0408 19:35:49.217136 1049784 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 19:35:49.217255 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:49.217275 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-160920 minikube.k8s.io/updated_at=2024_04_08T19_35_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f9de8f0b190a4305b11b3a925ec3e499cf3fc021 minikube.k8s.io/name=embed-certs-160920 minikube.k8s.io/primary=true
	I0408 19:35:49.381377 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:49.381389 1049784 ops.go:34] apiserver oom_adj: -16
	I0408 19:35:49.881855 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:50.381513 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:50.417548 1040091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:35:50.432523 1040091 api_server.go:72] duration metric: took 5m57.486373343s to wait for apiserver process to appear ...
	I0408 19:35:50.432547 1040091 api_server.go:88] waiting for apiserver healthz status ...
	I0408 19:35:50.432581 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:35:50.432639 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:35:50.485645 1040091 cri.go:89] found id: "9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6"
	I0408 19:35:50.485666 1040091 cri.go:89] found id: "b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061"
	I0408 19:35:50.485671 1040091 cri.go:89] found id: ""
	I0408 19:35:50.485678 1040091 logs.go:276] 2 containers: [9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6 b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061]
	I0408 19:35:50.485734 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.489789 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.494693 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0408 19:35:50.494763 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:35:50.535558 1040091 cri.go:89] found id: "7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3"
	I0408 19:35:50.535578 1040091 cri.go:89] found id: "db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a"
	I0408 19:35:50.535583 1040091 cri.go:89] found id: ""
	I0408 19:35:50.535591 1040091 logs.go:276] 2 containers: [7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3 db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a]
	I0408 19:35:50.535649 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.540120 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.544033 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0408 19:35:50.544107 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:35:50.588243 1040091 cri.go:89] found id: "7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2"
	I0408 19:35:50.588265 1040091 cri.go:89] found id: "3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892"
	I0408 19:35:50.588270 1040091 cri.go:89] found id: ""
	I0408 19:35:50.588286 1040091 logs.go:276] 2 containers: [7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2 3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892]
	I0408 19:35:50.588351 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.592067 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.595759 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:35:50.595951 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:35:50.657884 1040091 cri.go:89] found id: "edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90"
	I0408 19:35:50.657909 1040091 cri.go:89] found id: "a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f"
	I0408 19:35:50.657915 1040091 cri.go:89] found id: ""
	I0408 19:35:50.657933 1040091 logs.go:276] 2 containers: [edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90 a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f]
	I0408 19:35:50.657990 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.663278 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.667093 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:35:50.667170 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:35:50.704272 1040091 cri.go:89] found id: "69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf"
	I0408 19:35:50.704296 1040091 cri.go:89] found id: "63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070"
	I0408 19:35:50.704301 1040091 cri.go:89] found id: ""
	I0408 19:35:50.704309 1040091 logs.go:276] 2 containers: [69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf 63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070]
	I0408 19:35:50.704384 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.708641 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.712265 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:35:50.712341 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:35:50.754783 1040091 cri.go:89] found id: "f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46"
	I0408 19:35:50.754802 1040091 cri.go:89] found id: "172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58"
	I0408 19:35:50.754806 1040091 cri.go:89] found id: ""
	I0408 19:35:50.754813 1040091 logs.go:276] 2 containers: [f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46 172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58]
	I0408 19:35:50.754884 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.759728 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.763582 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0408 19:35:50.763672 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:35:50.815649 1040091 cri.go:89] found id: "19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93"
	I0408 19:35:50.815671 1040091 cri.go:89] found id: "544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356"
	I0408 19:35:50.815676 1040091 cri.go:89] found id: ""
	I0408 19:35:50.815683 1040091 logs.go:276] 2 containers: [19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93 544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356]
	I0408 19:35:50.815789 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.819818 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.823422 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:35:50.823518 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:35:50.881534 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:51.381583 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:51.881747 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:52.382449 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:52.881712 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:53.381517 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:53.881479 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:54.382310 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:54.881509 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:55.382269 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
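The repeated "get sa default" calls from process 1049784 are a readiness poll: after creating the minikube-rbac clusterrolebinding, the harness re-checks roughly every half second until kube-controller-manager has created the default service account in the new cluster. The same wait expressed as a shell loop, a sketch using the node-local paths from the log:

	# poll until the default ServiceAccount exists, then continue bring-up
	until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
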
	I0408 19:35:50.868462 1040091 cri.go:89] found id: "804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940"
	I0408 19:35:50.868531 1040091 cri.go:89] found id: ""
	I0408 19:35:50.868554 1040091 logs.go:276] 1 containers: [804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940]
	I0408 19:35:50.868636 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.872672 1040091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0408 19:35:50.872769 1040091 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 19:35:50.931700 1040091 cri.go:89] found id: "15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19"
	I0408 19:35:50.931728 1040091 cri.go:89] found id: "0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63"
	I0408 19:35:50.931733 1040091 cri.go:89] found id: ""
	I0408 19:35:50.931740 1040091 logs.go:276] 2 containers: [15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19 0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63]
	I0408 19:35:50.931852 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.936883 1040091 ssh_runner.go:195] Run: which crictl
	I0408 19:35:50.941169 1040091 logs.go:123] Gathering logs for container status ...
	I0408 19:35:50.941198 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:35:51.033741 1040091 logs.go:123] Gathering logs for etcd [7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3] ...
	I0408 19:35:51.033773 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3"
	I0408 19:35:51.083564 1040091 logs.go:123] Gathering logs for etcd [db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a] ...
	I0408 19:35:51.083594 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a"
	I0408 19:35:51.133070 1040091 logs.go:123] Gathering logs for coredns [7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2] ...
	I0408 19:35:51.133100 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2"
	I0408 19:35:51.188619 1040091 logs.go:123] Gathering logs for kube-controller-manager [172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58] ...
	I0408 19:35:51.188647 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58"
	I0408 19:35:51.267350 1040091 logs.go:123] Gathering logs for kindnet [19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93] ...
	I0408 19:35:51.267413 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93"
	I0408 19:35:51.345361 1040091 logs.go:123] Gathering logs for containerd ...
	I0408 19:35:51.345387 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0408 19:35:51.417368 1040091 logs.go:123] Gathering logs for dmesg ...
	I0408 19:35:51.417403 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:35:51.438377 1040091 logs.go:123] Gathering logs for coredns [3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892] ...
	I0408 19:35:51.438408 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892"
	I0408 19:35:51.485566 1040091 logs.go:123] Gathering logs for kubernetes-dashboard [804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940] ...
	I0408 19:35:51.485595 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940"
	I0408 19:35:51.535300 1040091 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:35:51.535328 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 19:35:51.685266 1040091 logs.go:123] Gathering logs for kube-apiserver [9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6] ...
	I0408 19:35:51.685299 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6"
	I0408 19:35:51.747971 1040091 logs.go:123] Gathering logs for kube-apiserver [b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061] ...
	I0408 19:35:51.748010 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061"
	I0408 19:35:51.816472 1040091 logs.go:123] Gathering logs for kube-scheduler [a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f] ...
	I0408 19:35:51.816521 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f"
	I0408 19:35:51.862579 1040091 logs.go:123] Gathering logs for kube-proxy [69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf] ...
	I0408 19:35:51.862611 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf"
	I0408 19:35:51.920721 1040091 logs.go:123] Gathering logs for kindnet [544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356] ...
	I0408 19:35:51.920750 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356"
	I0408 19:35:51.996944 1040091 logs.go:123] Gathering logs for kubelet ...
	I0408 19:35:51.997012 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 19:35:52.054348 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.354687     663 reflector.go:138] object-"default"/"default-token-gzsv4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gzsv4" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.054620 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.354764     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.054845 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.354859     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-zmg69": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-zmg69" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.055084 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.356924     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-vcs78": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-vcs78" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.055301 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.356979     663 reflector.go:138] object-"kube-system"/"coredns-token-w52sl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-w52sl" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.055543 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.357025     663 reflector.go:138] object-"kube-system"/"metrics-server-token-zxgqt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-zxgqt" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.055828 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.357068     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.056051 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-540675 kubelet[663]: E0408 19:30:09.374468     663 reflector.go:138] object-"kube-system"/"kindnet-token-h6csz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-h6csz" is forbidden: User "system:node:old-k8s-version-540675" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-540675' and this object
	W0408 19:35:52.063919 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:11 old-k8s-version-540675 kubelet[663]: E0408 19:30:11.776102     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:52.065482 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:12 old-k8s-version-540675 kubelet[663]: E0408 19:30:12.254992     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.068355 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:23 old-k8s-version-540675 kubelet[663]: E0408 19:30:23.971489     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:52.070485 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:34 old-k8s-version-540675 kubelet[663]: E0408 19:30:34.355720     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.070671 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:34 old-k8s-version-540675 kubelet[663]: E0408 19:30:34.977351     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.071001 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:35 old-k8s-version-540675 kubelet[663]: E0408 19:30:35.361567     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.071330 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:38 old-k8s-version-540675 kubelet[663]: E0408 19:30:38.654910     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.074126 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:49 old-k8s-version-540675 kubelet[663]: E0408 19:30:49.978868     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:52.075075 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:52 old-k8s-version-540675 kubelet[663]: E0408 19:30:52.422181     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.075409 1040091 logs.go:138] Found kubelet problem: Apr 08 19:30:58 old-k8s-version-540675 kubelet[663]: E0408 19:30:58.654477     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.075596 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:02 old-k8s-version-540675 kubelet[663]: E0408 19:31:02.963701     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.075924 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:09 old-k8s-version-540675 kubelet[663]: E0408 19:31:09.963237     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.076108 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:17 old-k8s-version-540675 kubelet[663]: E0408 19:31:17.964344     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.076700 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:21 old-k8s-version-540675 kubelet[663]: E0408 19:31:21.505624     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.077027 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:28 old-k8s-version-540675 kubelet[663]: E0408 19:31:28.654754     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.077211 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:29 old-k8s-version-540675 kubelet[663]: E0408 19:31:29.963597     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.077547 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:38 old-k8s-version-540675 kubelet[663]: E0408 19:31:38.963843     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.080029 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:40 old-k8s-version-540675 kubelet[663]: E0408 19:31:40.981110     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:52.080360 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:49 old-k8s-version-540675 kubelet[663]: E0408 19:31:49.963312     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.080544 1040091 logs.go:138] Found kubelet problem: Apr 08 19:31:53 old-k8s-version-540675 kubelet[663]: E0408 19:31:53.963765     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.081137 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:02 old-k8s-version-540675 kubelet[663]: E0408 19:32:02.592991     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.081326 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:07 old-k8s-version-540675 kubelet[663]: E0408 19:32:07.963579     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.081656 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:08 old-k8s-version-540675 kubelet[663]: E0408 19:32:08.654229     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.081856 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:18 old-k8s-version-540675 kubelet[663]: E0408 19:32:18.966254     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.082190 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:19 old-k8s-version-540675 kubelet[663]: E0408 19:32:19.963192     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.082377 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:29 old-k8s-version-540675 kubelet[663]: E0408 19:32:29.963578     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.082705 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:32 old-k8s-version-540675 kubelet[663]: E0408 19:32:32.963790     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.082891 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:42 old-k8s-version-540675 kubelet[663]: E0408 19:32:42.963650     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.083218 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:43 old-k8s-version-540675 kubelet[663]: E0408 19:32:43.963376     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.083403 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:53 old-k8s-version-540675 kubelet[663]: E0408 19:32:53.963583     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.083731 1040091 logs.go:138] Found kubelet problem: Apr 08 19:32:58 old-k8s-version-540675 kubelet[663]: E0408 19:32:58.964157     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.086182 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:04 old-k8s-version-540675 kubelet[663]: E0408 19:33:04.972212     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0408 19:35:52.086562 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:09 old-k8s-version-540675 kubelet[663]: E0408 19:33:09.963440     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.086750 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:17 old-k8s-version-540675 kubelet[663]: E0408 19:33:17.963569     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.087077 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:21 old-k8s-version-540675 kubelet[663]: E0408 19:33:21.963178     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.087263 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:29 old-k8s-version-540675 kubelet[663]: E0408 19:33:29.963894     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.087856 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:36 old-k8s-version-540675 kubelet[663]: E0408 19:33:36.797444     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.088191 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:38 old-k8s-version-540675 kubelet[663]: E0408 19:33:38.654611     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.088376 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:42 old-k8s-version-540675 kubelet[663]: E0408 19:33:42.965353     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.088705 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:52 old-k8s-version-540675 kubelet[663]: E0408 19:33:52.963651     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.088891 1040091 logs.go:138] Found kubelet problem: Apr 08 19:33:57 old-k8s-version-540675 kubelet[663]: E0408 19:33:57.963546     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.089220 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:05 old-k8s-version-540675 kubelet[663]: E0408 19:34:05.963339     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.089404 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:10 old-k8s-version-540675 kubelet[663]: E0408 19:34:10.966560     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.089732 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:16 old-k8s-version-540675 kubelet[663]: E0408 19:34:16.963733     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.089960 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:21 old-k8s-version-540675 kubelet[663]: E0408 19:34:21.963613     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.090315 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:30 old-k8s-version-540675 kubelet[663]: E0408 19:34:30.964250     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.090502 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:36 old-k8s-version-540675 kubelet[663]: E0408 19:34:36.963651     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.090837 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:44 old-k8s-version-540675 kubelet[663]: E0408 19:34:44.963713     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.091022 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:51 old-k8s-version-540675 kubelet[663]: E0408 19:34:51.963845     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.091355 1040091 logs.go:138] Found kubelet problem: Apr 08 19:34:55 old-k8s-version-540675 kubelet[663]: E0408 19:34:55.963188     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.091542 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:02 old-k8s-version-540675 kubelet[663]: E0408 19:35:02.969106     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.091869 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:10 old-k8s-version-540675 kubelet[663]: E0408 19:35:10.966162     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.092056 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:15 old-k8s-version-540675 kubelet[663]: E0408 19:35:15.963675     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.092384 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:23 old-k8s-version-540675 kubelet[663]: E0408 19:35:23.963177     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.092568 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:29 old-k8s-version-540675 kubelet[663]: E0408 19:35:29.963846     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.092895 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:38 old-k8s-version-540675 kubelet[663]: E0408 19:35:38.967622     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.093078 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:41 old-k8s-version-540675 kubelet[663]: E0408 19:35:41.963745     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.093406 1040091 logs.go:138] Found kubelet problem: Apr 08 19:35:51 old-k8s-version-540675 kubelet[663]: E0408 19:35:51.963484     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	I0408 19:35:52.093415 1040091 logs.go:123] Gathering logs for kube-scheduler [edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90] ...
	I0408 19:35:52.093430 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90"
	I0408 19:35:52.134609 1040091 logs.go:123] Gathering logs for kube-proxy [63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070] ...
	I0408 19:35:52.134636 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070"
	I0408 19:35:52.185233 1040091 logs.go:123] Gathering logs for kube-controller-manager [f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46] ...
	I0408 19:35:52.185259 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46"
	I0408 19:35:52.258111 1040091 logs.go:123] Gathering logs for storage-provisioner [15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19] ...
	I0408 19:35:52.258141 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19"
	I0408 19:35:52.304897 1040091 logs.go:123] Gathering logs for storage-provisioner [0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63] ...
	I0408 19:35:52.304924 1040091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63"
	I0408 19:35:52.359635 1040091 out.go:304] Setting ErrFile to fd 2...
	I0408 19:35:52.359661 1040091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 19:35:52.359740 1040091 out.go:239] X Problems detected in kubelet:
	W0408 19:35:52.359755 1040091 out.go:239]   Apr 08 19:35:23 old-k8s-version-540675 kubelet[663]: E0408 19:35:23.963177     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.359877 1040091 out.go:239]   Apr 08 19:35:29 old-k8s-version-540675 kubelet[663]: E0408 19:35:29.963846     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.359908 1040091 out.go:239]   Apr 08 19:35:38 old-k8s-version-540675 kubelet[663]: E0408 19:35:38.967622     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	W0408 19:35:52.359920 1040091 out.go:239]   Apr 08 19:35:41 old-k8s-version-540675 kubelet[663]: E0408 19:35:41.963745     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0408 19:35:52.359931 1040091 out.go:239]   Apr 08 19:35:51 old-k8s-version-540675 kubelet[663]: E0408 19:35:51.963484     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	I0408 19:35:52.359938 1040091 out.go:304] Setting ErrFile to fd 2...
	I0408 19:35:52.359945 1040091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:35:55.882125 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:56.381526 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:56.882349 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:57.381543 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:57.882454 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:58.382097 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:58.881923 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:59.382279 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:35:59.882105 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:36:00.382185 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:36:00.881899 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:36:01.382176 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:36:01.881497 1049784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 19:36:02.012079 1049784 kubeadm.go:1107] duration metric: took 12.794912435s to wait for elevateKubeSystemPrivileges
	W0408 19:36:02.012124 1049784 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 19:36:02.012133 1049784 kubeadm.go:393] duration metric: took 31.388221148s to StartCluster
	I0408 19:36:02.012151 1049784 settings.go:142] acquiring lock: {Name:mk5026d653ab6560d4c2e7a68e9bc77339a3813a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:36:02.012220 1049784 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18585-838483/kubeconfig
	I0408 19:36:02.013845 1049784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-838483/kubeconfig: {Name:mk2667c6d217e28cc639f1cedf47734a14602005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:36:02.014227 1049784 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0408 19:36:02.017974 1049784 out.go:177] * Verifying Kubernetes components...
	I0408 19:36:02.014290 1049784 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0408 19:36:02.014508 1049784 config.go:182] Loaded profile config "embed-certs-160920": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 19:36:02.014521 1049784 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 19:36:02.018261 1049784 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-160920"
	I0408 19:36:02.018291 1049784 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-160920"
	I0408 19:36:02.018325 1049784 host.go:66] Checking if "embed-certs-160920" exists ...
	I0408 19:36:02.018827 1049784 cli_runner.go:164] Run: docker container inspect embed-certs-160920 --format={{.State.Status}}
	I0408 19:36:02.018997 1049784 addons.go:69] Setting default-storageclass=true in profile "embed-certs-160920"
	I0408 19:36:02.019024 1049784 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-160920"
	I0408 19:36:02.019256 1049784 cli_runner.go:164] Run: docker container inspect embed-certs-160920 --format={{.State.Status}}
	I0408 19:36:02.023799 1049784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:36:02.052820 1049784 addons.go:234] Setting addon default-storageclass=true in "embed-certs-160920"
	I0408 19:36:02.052866 1049784 host.go:66] Checking if "embed-certs-160920" exists ...
	I0408 19:36:02.053301 1049784 cli_runner.go:164] Run: docker container inspect embed-certs-160920 --format={{.State.Status}}
	I0408 19:36:02.072711 1049784 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:36:02.361001 1040091 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0408 19:36:02.518753 1040091 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0408 19:36:02.521014 1040091 out.go:177] 
	W0408 19:36:02.523038 1040091 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0408 19:36:02.523081 1040091 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0408 19:36:02.523099 1040091 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0408 19:36:02.523105 1040091 out.go:239] * 
	W0408 19:36:02.524124 1040091 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 19:36:02.526444 1040091 out.go:177] 
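	For reference, the apiserver health probe recorded above (api_server.go:253/279) is a plain HTTPS GET against /healthz; it passes (200: ok) even though the control plane never reports the expected v1.20.0. Below is a minimal standalone sketch of such a probe, assuming this test cluster's address and its self-signed certificate; it is an illustration, not minikube's own implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The test cluster's apiserver serves a self-signed certificate,
				// so verification is skipped here (illustration only).
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers "200: ok", matching the log above.
		fmt.Printf("%d: %s\n", resp.StatusCode, body)
	}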
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	60f4715cf22bf       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   8bb94d5e7fab2       dashboard-metrics-scraper-8d5bb5db8-zvfx4
	804611f5f91fa       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   9eba032d6f0bb       kubernetes-dashboard-cd95d586-pvtss
	19066ce37f4dc       4740c1948d3fc       5 minutes ago       Running             kindnet-cni                 1                   dde96b79afb11       kindnet-9dmvc
	7da0e08edf872       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   a0c7e2829706b       coredns-74ff55c5b-8jdhp
	15ff307abc093       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         1                   5aeb0ad402dc6       storage-provisioner
	69372bf354a34       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   5339ee05ff484       kube-proxy-jsgdk
	eb78d7f3d089e       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   8710ce557cf06       busybox
	f4136ca918a05       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   9a4ac67289909       kube-controller-manager-old-k8s-version-540675
	7f43de202e2c7       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   cbd7a81eb3b3f       etcd-old-k8s-version-540675
	9782d344b50aa       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   3f25f94dbd50c       kube-apiserver-old-k8s-version-540675
	edd6064b9a04f       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   5d41a68484557       kube-scheduler-old-k8s-version-540675
	80cf00a3fa423       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   4506fe7ffc4bf       busybox
	3354a361f1dbd       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   b5ee1e5e151e8       coredns-74ff55c5b-8jdhp
	0775c1e2afbac       ba04bb24b9575       8 minutes ago       Exited              storage-provisioner         0                   5e18a22db2e9d       storage-provisioner
	63fb6a30645da       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   d3da179f70825       kube-proxy-jsgdk
	544ebb710a41b       4740c1948d3fc       8 minutes ago       Exited              kindnet-cni                 0                   eb0916916a05e       kindnet-9dmvc
	b47f92d3ca467       2c08bbbc02d3a       9 minutes ago       Exited              kube-apiserver              0                   c3b222ccb0f9c       kube-apiserver-old-k8s-version-540675
	a229a6c35f867       e7605f88f17d6       9 minutes ago       Exited              kube-scheduler              0                   d0f4501b2a99f       kube-scheduler-old-k8s-version-540675
	db75c18b448b7       05b738aa1bc63       9 minutes ago       Exited              etcd                        0                   e080f0c7ebe9a       etcd-old-k8s-version-540675
	172cca83cb365       1df8a2b116bd1       9 minutes ago       Exited              kube-controller-manager     0                   63b20652563e6       kube-controller-manager-old-k8s-version-540675
	
	
	==> containerd <==
	Apr 08 19:32:02 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:32:02.007209998Z" level=info msg="CreateContainer within sandbox \"8bb94d5e7fab2da0a6511336d7dae43245a35deebda9c30aca498d94712f2d0f\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"1733d76a53831287816e20e2e57eb8ca258d81a9e5ebf1e0605427b2cbb3d791\""
	Apr 08 19:32:02 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:32:02.009316972Z" level=info msg="StartContainer for \"1733d76a53831287816e20e2e57eb8ca258d81a9e5ebf1e0605427b2cbb3d791\""
	Apr 08 19:32:02 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:32:02.083681110Z" level=info msg="StartContainer for \"1733d76a53831287816e20e2e57eb8ca258d81a9e5ebf1e0605427b2cbb3d791\" returns successfully"
	Apr 08 19:32:02 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:32:02.111903637Z" level=info msg="shim disconnected" id=1733d76a53831287816e20e2e57eb8ca258d81a9e5ebf1e0605427b2cbb3d791
	Apr 08 19:32:02 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:32:02.111969966Z" level=warning msg="cleaning up after shim disconnected" id=1733d76a53831287816e20e2e57eb8ca258d81a9e5ebf1e0605427b2cbb3d791 namespace=k8s.io
	Apr 08 19:32:02 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:32:02.112037853Z" level=info msg="cleaning up dead shim"
	Apr 08 19:32:02 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:32:02.121191598Z" level=warning msg="cleanup warnings time=\"2024-04-08T19:32:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2907 runtime=io.containerd.runc.v2\n"
	Apr 08 19:32:02 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:32:02.595620649Z" level=info msg="RemoveContainer for \"7bae3de43125039d3c04ee93d1bafd41099a372c65a5fe60d3ec86948b6ceb02\""
	Apr 08 19:32:02 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:32:02.601573490Z" level=info msg="RemoveContainer for \"7bae3de43125039d3c04ee93d1bafd41099a372c65a5fe60d3ec86948b6ceb02\" returns successfully"
	Apr 08 19:33:04 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:33:04.964073823Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 08 19:33:04 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:33:04.970141845Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Apr 08 19:33:04 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:33:04.971772433Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Apr 08 19:33:35 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:33:35.965253188Z" level=info msg="CreateContainer within sandbox \"8bb94d5e7fab2da0a6511336d7dae43245a35deebda9c30aca498d94712f2d0f\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Apr 08 19:33:35 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:33:35.980324452Z" level=info msg="CreateContainer within sandbox \"8bb94d5e7fab2da0a6511336d7dae43245a35deebda9c30aca498d94712f2d0f\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"60f4715cf22bfd271f8ff5de85b1aa7e29b28a904c0d677c0415de739aa5a3ea\""
	Apr 08 19:33:35 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:33:35.980993137Z" level=info msg="StartContainer for \"60f4715cf22bfd271f8ff5de85b1aa7e29b28a904c0d677c0415de739aa5a3ea\""
	Apr 08 19:33:36 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:33:36.048159904Z" level=info msg="StartContainer for \"60f4715cf22bfd271f8ff5de85b1aa7e29b28a904c0d677c0415de739aa5a3ea\" returns successfully"
	Apr 08 19:33:36 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:33:36.074877682Z" level=info msg="shim disconnected" id=60f4715cf22bfd271f8ff5de85b1aa7e29b28a904c0d677c0415de739aa5a3ea
	Apr 08 19:33:36 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:33:36.074940400Z" level=warning msg="cleaning up after shim disconnected" id=60f4715cf22bfd271f8ff5de85b1aa7e29b28a904c0d677c0415de739aa5a3ea namespace=k8s.io
	Apr 08 19:33:36 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:33:36.074954463Z" level=info msg="cleaning up dead shim"
	Apr 08 19:33:36 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:33:36.082890639Z" level=warning msg="cleanup warnings time=\"2024-04-08T19:33:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3160 runtime=io.containerd.runc.v2\n"
	Apr 08 19:33:36 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:33:36.799065822Z" level=info msg="RemoveContainer for \"1733d76a53831287816e20e2e57eb8ca258d81a9e5ebf1e0605427b2cbb3d791\""
	Apr 08 19:33:36 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:33:36.804639662Z" level=info msg="RemoveContainer for \"1733d76a53831287816e20e2e57eb8ca258d81a9e5ebf1e0605427b2cbb3d791\" returns successfully"
	Apr 08 19:35:54 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:35:54.964497118Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 08 19:35:54 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:35:54.980824851Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Apr 08 19:35:54 old-k8s-version-540675 containerd[570]: time="2024-04-08T19:35:54.982748494Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
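	The pull failures above never reach a registry: resolution of fake.domain (which appears to be a deliberately unresolvable test image reference) fails at the host's DNS resolver, 192.168.76.1:53, before any HEAD request can be attempted. A minimal sketch reproducing just that DNS step, with the hostname taken from the log (running it on the node itself is an assumption):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// fake.domain has no DNS record, so this fails with
		// "lookup fake.domain ... no such host" -- the same error containerd
		// reports before giving up on the image pull.
		addrs, err := net.LookupHost("fake.domain")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("unexpectedly resolved:", addrs)
	}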
	
	
	==> coredns [3354a361f1dbd1578fd2868af797789b6328d7dfdce311c72422c275d8076892] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:43645 - 10557 "HINFO IN 5421278197694816847.8196218050379655238. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019549713s
	
	
	==> coredns [7da0e08edf872581909e521c647dae9296483acdec4863d70095f59ce4f7c9a2] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:33248 - 14112 "HINFO IN 4003124401653843397.2500130840007994482. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029143931s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-540675
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-540675
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9de8f0b190a4305b11b3a925ec3e499cf3fc021
	                    minikube.k8s.io/name=old-k8s-version-540675
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T19_27_12_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 19:27:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-540675
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 19:36:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 19:31:09 +0000   Mon, 08 Apr 2024 19:27:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 19:31:09 +0000   Mon, 08 Apr 2024 19:27:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 19:31:09 +0000   Mon, 08 Apr 2024 19:27:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 19:31:09 +0000   Mon, 08 Apr 2024 19:27:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-540675
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 068192551743484a8ee20a898306d582
	  System UUID:                3e1060de-21a7-4bb8-a099-13273ac64b3f
	  Boot ID:                    b4b2abab-4517-475f-9e8e-63d816803507
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 coredns-74ff55c5b-8jdhp                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m37s
	  kube-system                 etcd-old-k8s-version-540675                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m44s
	  kube-system                 kindnet-9dmvc                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m37s
	  kube-system                 kube-apiserver-old-k8s-version-540675             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 kube-controller-manager-old-k8s-version-540675    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 kube-proxy-jsgdk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-scheduler-old-k8s-version-540675             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 metrics-server-9975d5f86-fvr87                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m31s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m35s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-zvfx4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-pvtss               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
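	(The request/limit percentages above are computed against the node's allocatable capacity: 950m of 2 CPUs (2000m) is 47%, and 420Mi against 8022564Ki of memory is roughly 5%.)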
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  9m4s (x5 over 9m4s)  kubelet     Node old-k8s-version-540675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m4s (x4 over 9m4s)  kubelet     Node old-k8s-version-540675 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m45s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m45s                kubelet     Node old-k8s-version-540675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m45s                kubelet     Node old-k8s-version-540675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m45s                kubelet     Node old-k8s-version-540675 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m45s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m37s                kubelet     Node old-k8s-version-540675 status is now: NodeReady
	  Normal  Starting                 8m36s                kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m4s                 kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m4s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m3s (x8 over 6m4s)  kubelet     Node old-k8s-version-540675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s (x8 over 6m4s)  kubelet     Node old-k8s-version-540675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s (x7 over 6m4s)  kubelet     Node old-k8s-version-540675 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m53s                kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001042] FS-Cache: O-key=[8] '36d4c90000000000'
	[  +0.000719] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000931] FS-Cache: N-cookie d=000000002ce2b4d1{9p.inode} n=000000004e1ae318
	[  +0.001021] FS-Cache: N-key=[8] '36d4c90000000000'
	[  +0.003218] FS-Cache: Duplicate cookie detected
	[  +0.000732] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000935] FS-Cache: O-cookie d=000000002ce2b4d1{9p.inode} n=000000006d0c49e0
	[  +0.001134] FS-Cache: O-key=[8] '36d4c90000000000'
	[  +0.000710] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000934] FS-Cache: N-cookie d=000000002ce2b4d1{9p.inode} n=00000000c36a3df6
	[  +0.001046] FS-Cache: N-key=[8] '36d4c90000000000'
	[  +2.726305] FS-Cache: Duplicate cookie detected
	[  +0.000790] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001064] FS-Cache: O-cookie d=000000002ce2b4d1{9p.inode} n=0000000060bf15ac
	[  +0.001248] FS-Cache: O-key=[8] '35d4c90000000000'
	[  +0.000713] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000938] FS-Cache: N-cookie d=000000002ce2b4d1{9p.inode} n=0000000099083bd9
	[  +0.001194] FS-Cache: N-key=[8] '35d4c90000000000'
	[  +0.329667] FS-Cache: Duplicate cookie detected
	[  +0.000738] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000974] FS-Cache: O-cookie d=000000002ce2b4d1{9p.inode} n=0000000083bcb924
	[  +0.001047] FS-Cache: O-key=[8] '3bd4c90000000000'
	[  +0.000776] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000942] FS-Cache: N-cookie d=000000002ce2b4d1{9p.inode} n=00000000cd19031b
	[  +0.001036] FS-Cache: N-key=[8] '3bd4c90000000000'
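	Note: the FS-Cache "Duplicate cookie detected" messages come from the host kernel's 9p/fscache layer; they are common background noise on these Docker-driver CI hosts and do not appear related to the failure under investigation.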
	
	
	==> etcd [7f43de202e2c70d5438d1f6d9ad32d89f99ba537beb76894b754d7599085a3b3] <==
	2024-04-08 19:31:57.985689 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:32:07.985756 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:32:17.985702 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:32:27.985614 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:32:37.985661 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:32:47.985868 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:32:57.985880 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:33:07.985789 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:33:17.985813 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:33:27.985697 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:33:37.985615 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:33:47.985574 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:33:57.985649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:34:07.985726 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:34:17.985707 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:34:27.985745 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:34:37.985685 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:34:47.985624 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:34:57.985679 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:35:07.985772 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:35:17.985737 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:35:27.985745 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:35:37.986054 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:35:47.985619 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:35:57.985758 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [db75c18b448b75951c47438746a6350e37afb65a683f1b578e47cf215193e74a] <==
	raft2024/04/08 19:27:02 INFO: ea7e25599daad906 became leader at term 2
	raft2024/04/08 19:27:02 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-04-08 19:27:02.456858 I | etcdserver: published {Name:old-k8s-version-540675 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-04-08 19:27:02.456952 I | embed: ready to serve client requests
	2024-04-08 19:27:02.458449 I | embed: serving client requests on 192.168.76.2:2379
	2024-04-08 19:27:02.458647 I | etcdserver: setting up the initial cluster version to 3.4
	2024-04-08 19:27:02.458972 I | embed: ready to serve client requests
	2024-04-08 19:27:02.460467 I | embed: serving client requests on 127.0.0.1:2379
	2024-04-08 19:27:02.475436 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-04-08 19:27:02.477720 I | etcdserver/api: enabled capabilities for version 3.4
	2024-04-08 19:27:11.139238 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:27:22.269048 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:27:30.487884 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:27:40.488045 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:27:50.488199 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:28:00.488273 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:28:10.488161 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:28:20.490423 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:28:30.488030 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:28:40.487965 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:28:50.488006 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:29:00.488247 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:29:10.490261 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:29:20.488208 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-08 19:29:30.488030 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 19:36:05 up  4:18,  0 users,  load average: 3.32, 2.10, 2.43
	Linux old-k8s-version-540675 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [19066ce37f4dc6f50dd3738d428c714d40d4c5f4267d031cdaf9938c7017cc93] <==
	I0408 19:34:04.444281       1 main.go:227] handling current node
	I0408 19:34:14.449935       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:34:14.450062       1 main.go:227] handling current node
	I0408 19:34:24.464987       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:34:24.465013       1 main.go:227] handling current node
	I0408 19:34:34.480278       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:34:34.480993       1 main.go:227] handling current node
	I0408 19:34:44.502566       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:34:44.502666       1 main.go:227] handling current node
	I0408 19:34:54.516165       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:34:54.516208       1 main.go:227] handling current node
	I0408 19:35:04.530114       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:35:04.530141       1 main.go:227] handling current node
	I0408 19:35:14.546421       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:35:14.546461       1 main.go:227] handling current node
	I0408 19:35:24.563148       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:35:24.563184       1 main.go:227] handling current node
	I0408 19:35:34.584466       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:35:34.584835       1 main.go:227] handling current node
	I0408 19:35:44.597406       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:35:44.597688       1 main.go:227] handling current node
	I0408 19:35:54.609230       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:35:54.609261       1 main.go:227] handling current node
	I0408 19:36:04.621586       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:36:04.621646       1 main.go:227] handling current node
	
	
	==> kindnet [544ebb710a41be25a2832159479391acf3c81c449ab52a550bbc1c468add7356] <==
	podIP = 192.168.76.2
	I0408 19:27:28.430212       1 main.go:116] setting mtu 1500 for CNI 
	I0408 19:27:28.430233       1 main.go:146] kindnetd IP family: "ipv4"
	I0408 19:27:28.430245       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0408 19:27:58.699161       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0408 19:27:58.723576       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:27:58.723615       1 main.go:227] handling current node
	I0408 19:28:08.744883       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:28:08.745091       1 main.go:227] handling current node
	I0408 19:28:18.765874       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:28:18.765902       1 main.go:227] handling current node
	I0408 19:28:28.770425       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:28:28.770513       1 main.go:227] handling current node
	I0408 19:28:38.792279       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:28:38.792305       1 main.go:227] handling current node
	I0408 19:28:48.815368       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:28:48.815398       1 main.go:227] handling current node
	I0408 19:28:58.829535       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:28:58.829568       1 main.go:227] handling current node
	I0408 19:29:08.853479       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:29:08.853574       1 main.go:227] handling current node
	I0408 19:29:18.874462       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:29:18.874488       1 main.go:227] handling current node
	I0408 19:29:28.902184       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0408 19:29:28.902385       1 main.go:227] handling current node
	
	
	==> kube-apiserver [9782d344b50aa4213d02e412ef50f0e09d43684e27add4c8001ae9c2784d14d6] <==
	I0408 19:32:40.497791       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0408 19:32:40.497822       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0408 19:33:11.891451       1 handler_proxy.go:102] no RequestInfo found in the context
	E0408 19:33:11.891645       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 19:33:11.891781       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0408 19:33:19.765401       1 client.go:360] parsed scheme: "passthrough"
	I0408 19:33:19.765607       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0408 19:33:19.765624       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0408 19:33:59.401020       1 client.go:360] parsed scheme: "passthrough"
	I0408 19:33:59.401065       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0408 19:33:59.401074       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0408 19:34:37.248497       1 client.go:360] parsed scheme: "passthrough"
	I0408 19:34:37.248544       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0408 19:34:37.248553       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0408 19:35:10.356826       1 handler_proxy.go:102] no RequestInfo found in the context
	E0408 19:35:10.357033       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 19:35:10.357050       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0408 19:35:14.611798       1 client.go:360] parsed scheme: "passthrough"
	I0408 19:35:14.612036       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0408 19:35:14.612119       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0408 19:35:47.484726       1 client.go:360] parsed scheme: "passthrough"
	I0408 19:35:47.484782       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0408 19:35:47.484924       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
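	Note: the recurring 503 for v1beta1.metrics.k8s.io follows from the metrics-server pod never becoming ready — the aggregated APIService has no healthy backend to proxy to. A minimal way to inspect the APIService status (hypothetical follow-up, not part of the recorded run):
	
	  kubectl --context old-k8s-version-540675 get apiservice v1beta1.metrics.k8s.io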
	
	
	==> kube-apiserver [b47f92d3ca46776a452b77a4e3003d95f22af24e73b8f90f6c6fac320f8f2061] <==
	I0408 19:27:09.347035       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0408 19:27:09.347909       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0408 19:27:09.809555       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0408 19:27:09.855492       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0408 19:27:09.953533       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0408 19:27:09.954885       1 controller.go:606] quota admission added evaluator for: endpoints
	I0408 19:27:09.959215       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0408 19:27:10.969984       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0408 19:27:11.320432       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0408 19:27:11.379260       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0408 19:27:19.785243       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0408 19:27:27.471201       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0408 19:27:27.536641       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0408 19:27:42.854086       1 client.go:360] parsed scheme: "passthrough"
	I0408 19:27:42.854324       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0408 19:27:42.854341       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0408 19:28:19.609627       1 client.go:360] parsed scheme: "passthrough"
	I0408 19:28:19.609673       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0408 19:28:19.609683       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0408 19:28:49.984100       1 client.go:360] parsed scheme: "passthrough"
	I0408 19:28:49.984154       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0408 19:28:49.984164       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0408 19:29:33.038746       1 client.go:360] parsed scheme: "passthrough"
	I0408 19:29:33.038791       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0408 19:29:33.038800       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [172cca83cb365712ebbfef367ab092086c8c039ff7dfa5d923a8555bece58f58] <==
	I0408 19:27:27.518783       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0408 19:27:27.519130       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0408 19:27:27.519709       1 event.go:291] "Event occurred" object="old-k8s-version-540675" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-540675 event: Registered Node old-k8s-version-540675 in Controller"
	I0408 19:27:27.561873       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9dmvc"
	I0408 19:27:27.570210       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jsgdk"
	I0408 19:27:27.579201       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0408 19:27:27.584779       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-540675" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 19:27:27.607862       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0408 19:27:27.607991       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0408 19:27:27.608001       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0408 19:27:27.608009       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0408 19:27:27.609234       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0408 19:27:27.667738       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-4bh7q"
	I0408 19:27:27.689480       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-8jdhp"
	I0408 19:27:27.721370       1 shared_informer.go:247] Caches are synced for resource quota 
	I0408 19:27:27.737862       1 shared_informer.go:247] Caches are synced for resource quota 
	I0408 19:27:27.878674       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0408 19:27:28.178856       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0408 19:27:28.210563       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0408 19:27:28.210585       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0408 19:27:29.230630       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0408 19:27:29.241173       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-4bh7q"
	I0408 19:27:32.519029       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0408 19:29:32.652224       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0408 19:29:32.827121       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [f4136ca918a056266b72f1ad3c99428b11b0a0f6298ac046a836af1a28a75b46] <==
	E0408 19:31:58.793591       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0408 19:32:05.367014       1 request.go:655] Throttling request took 1.048288483s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s
	W0408 19:32:06.218377       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0408 19:32:29.295413       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0408 19:32:37.868954       1 request.go:655] Throttling request took 1.048334417s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0408 19:32:38.720412       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0408 19:32:59.797202       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0408 19:33:10.371349       1 request.go:655] Throttling request took 1.048688108s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0408 19:33:11.222544       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0408 19:33:30.298988       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0408 19:33:42.873055       1 request.go:655] Throttling request took 1.048328727s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0408 19:33:43.724655       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0408 19:34:00.805901       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0408 19:34:15.375044       1 request.go:655] Throttling request took 1.048514505s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	W0408 19:34:16.226504       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0408 19:34:31.307883       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0408 19:34:47.876959       1 request.go:655] Throttling request took 1.045453145s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0408 19:34:48.728511       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0408 19:35:01.810190       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0408 19:35:20.379034       1 request.go:655] Throttling request took 1.04716233s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0408 19:35:21.230586       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0408 19:35:32.312190       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0408 19:35:52.881428       1 request.go:655] Throttling request took 1.04822203s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0408 19:35:53.732920       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0408 19:36:02.818979       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
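	Note: the resource-quota and garbage-collector errors above share the same root cause — API group discovery includes metrics.k8s.io/v1beta1, whose aggregated backend is down, so listing that group fails. A hypothetical check that would surface the same discovery error:
	
	  kubectl --context old-k8s-version-540675 api-resources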
	
	
	==> kube-proxy [63fb6a30645dad8d43153426d4f20a732106030a64ea0d76de079ffabdc30070] <==
	I0408 19:27:28.826458       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0408 19:27:28.826645       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0408 19:27:28.861952       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0408 19:27:28.862202       1 server_others.go:185] Using iptables Proxier.
	I0408 19:27:28.862447       1 server.go:650] Version: v1.20.0
	I0408 19:27:28.867412       1 config.go:315] Starting service config controller
	I0408 19:27:28.867473       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0408 19:27:28.867631       1 config.go:224] Starting endpoint slice config controller
	I0408 19:27:28.867636       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0408 19:27:28.967873       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0408 19:27:28.967944       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [69372bf354a34b82352db843d7f5950b71afb22bf8e3a715837346e6bc7616cf] <==
	I0408 19:30:11.604037       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0408 19:30:11.604287       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0408 19:30:11.621535       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0408 19:30:11.621817       1 server_others.go:185] Using iptables Proxier.
	I0408 19:30:11.622281       1 server.go:650] Version: v1.20.0
	I0408 19:30:11.622976       1 config.go:315] Starting service config controller
	I0408 19:30:11.622991       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0408 19:30:11.623015       1 config.go:224] Starting endpoint slice config controller
	I0408 19:30:11.623019       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0408 19:30:11.723101       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0408 19:30:11.723176       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [a229a6c35f86714126b50c56b23dec2cb29d5e4104d5e0d63e8f8e393516a84f] <==
	I0408 19:27:03.363264       1 serving.go:331] Generated self-signed cert in-memory
	W0408 19:27:08.461563       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0408 19:27:08.461594       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 19:27:08.461606       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0408 19:27:08.461611       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0408 19:27:08.534506       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0408 19:27:08.538885       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0408 19:27:08.538919       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0408 19:27:08.543384       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0408 19:27:08.563629       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 19:27:08.563741       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0408 19:27:08.566412       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0408 19:27:08.566495       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 19:27:08.566983       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 19:27:08.567076       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0408 19:27:08.590477       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 19:27:08.590593       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0408 19:27:08.590696       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 19:27:08.590782       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 19:27:08.590866       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 19:27:08.591269       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 19:27:09.425341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 19:27:09.619061       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0408 19:27:09.939000       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [edd6064b9a04f310af7d14143dc439d015b5797201e5e20f45811626d4586f90] <==
	I0408 19:30:03.754559       1 serving.go:331] Generated self-signed cert in-memory
	W0408 19:30:09.185478       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0408 19:30:09.185510       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 19:30:09.185518       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0408 19:30:09.185522       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0408 19:30:09.469041       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0408 19:30:09.478217       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0408 19:30:09.478242       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0408 19:30:09.478261       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0408 19:30:09.586224       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Apr 08 19:34:30 old-k8s-version-540675 kubelet[663]: E0408 19:34:30.964250     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	Apr 08 19:34:36 old-k8s-version-540675 kubelet[663]: E0408 19:34:36.963651     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 08 19:34:44 old-k8s-version-540675 kubelet[663]: I0408 19:34:44.963295     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 60f4715cf22bfd271f8ff5de85b1aa7e29b28a904c0d677c0415de739aa5a3ea
	Apr 08 19:34:44 old-k8s-version-540675 kubelet[663]: E0408 19:34:44.963713     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	Apr 08 19:34:51 old-k8s-version-540675 kubelet[663]: E0408 19:34:51.963845     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 08 19:34:55 old-k8s-version-540675 kubelet[663]: I0408 19:34:55.962848     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 60f4715cf22bfd271f8ff5de85b1aa7e29b28a904c0d677c0415de739aa5a3ea
	Apr 08 19:34:55 old-k8s-version-540675 kubelet[663]: E0408 19:34:55.963188     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	Apr 08 19:35:02 old-k8s-version-540675 kubelet[663]: E0408 19:35:02.969106     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 08 19:35:10 old-k8s-version-540675 kubelet[663]: I0408 19:35:10.965345     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 60f4715cf22bfd271f8ff5de85b1aa7e29b28a904c0d677c0415de739aa5a3ea
	Apr 08 19:35:10 old-k8s-version-540675 kubelet[663]: E0408 19:35:10.966162     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	Apr 08 19:35:15 old-k8s-version-540675 kubelet[663]: E0408 19:35:15.963675     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 08 19:35:23 old-k8s-version-540675 kubelet[663]: I0408 19:35:23.962832     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 60f4715cf22bfd271f8ff5de85b1aa7e29b28a904c0d677c0415de739aa5a3ea
	Apr 08 19:35:23 old-k8s-version-540675 kubelet[663]: E0408 19:35:23.963177     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	Apr 08 19:35:29 old-k8s-version-540675 kubelet[663]: E0408 19:35:29.963846     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 08 19:35:38 old-k8s-version-540675 kubelet[663]: I0408 19:35:38.966868     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 60f4715cf22bfd271f8ff5de85b1aa7e29b28a904c0d677c0415de739aa5a3ea
	Apr 08 19:35:38 old-k8s-version-540675 kubelet[663]: E0408 19:35:38.967622     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	Apr 08 19:35:41 old-k8s-version-540675 kubelet[663]: E0408 19:35:41.963745     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 08 19:35:51 old-k8s-version-540675 kubelet[663]: I0408 19:35:51.963040     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 60f4715cf22bfd271f8ff5de85b1aa7e29b28a904c0d677c0415de739aa5a3ea
	Apr 08 19:35:51 old-k8s-version-540675 kubelet[663]: E0408 19:35:51.963484     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
	Apr 08 19:35:54 old-k8s-version-540675 kubelet[663]: E0408 19:35:54.983198     663 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Apr 08 19:35:54 old-k8s-version-540675 kubelet[663]: E0408 19:35:54.983257     663 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Apr 08 19:35:54 old-k8s-version-540675 kubelet[663]: E0408 19:35:54.983423     663 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-zxgqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Apr 08 19:35:54 old-k8s-version-540675 kubelet[663]: E0408 19:35:54.983465     663 pod_workers.go:191] Error syncing pod b90394fc-5cbe-467a-8c63-42a1b74f2e3d ("metrics-server-9975d5f86-fvr87_kube-system(b90394fc-5cbe-467a-8c63-42a1b74f2e3d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Apr 08 19:36:03 old-k8s-version-540675 kubelet[663]: I0408 19:36:03.962824     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 60f4715cf22bfd271f8ff5de85b1aa7e29b28a904c0d677c0415de739aa5a3ea
	Apr 08 19:36:03 old-k8s-version-540675 kubelet[663]: E0408 19:36:03.963145     663 pod_workers.go:191] Error syncing pod cbe8eeaa-311b-421b-91d2-4ec309fa0bb2 ("dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zvfx4_kubernetes-dashboard(cbe8eeaa-311b-421b-91d2-4ec309fa0bb2)"
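	Note: the ErrImagePull/ImagePullBackOff cycle above can be reproduced directly against the node's container runtime; a minimal sketch, assuming the same minikube binary used elsewhere in this report (hypothetical, not part of the recorded run):
	
	  # runs crictl inside the minikube node; expected to fail with the same "no such host" error
	  out/minikube-linux-arm64 -p old-k8s-version-540675 ssh "sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4"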
	
	
	==> kubernetes-dashboard [804611f5f91fa76c1e8930aebfbe7bf8981efe1e0f6767500c6214686b7ca940] <==
	2024/04/08 19:30:36 Using namespace: kubernetes-dashboard
	2024/04/08 19:30:36 Using in-cluster config to connect to apiserver
	2024/04/08 19:30:36 Using secret token for csrf signing
	2024/04/08 19:30:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/04/08 19:30:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/04/08 19:30:36 Successful initial request to the apiserver, version: v1.20.0
	2024/04/08 19:30:36 Generating JWE encryption key
	2024/04/08 19:30:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/04/08 19:30:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/04/08 19:30:37 Initializing JWE encryption key from synchronized object
	2024/04/08 19:30:37 Creating in-cluster Sidecar client
	2024/04/08 19:30:37 Serving insecurely on HTTP port: 9090
	2024/04/08 19:30:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/08 19:31:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/08 19:31:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/08 19:32:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/08 19:32:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/08 19:33:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/08 19:33:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/08 19:34:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/08 19:34:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/08 19:35:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/08 19:35:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/08 19:30:36 Starting overwatch
	
	
	==> storage-provisioner [0775c1e2afbac6a2dea9c9fe0e12133e3a0ce1dc3aa24983a4d3e96c68ff1d63] <==
	I0408 19:27:29.674914       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 19:27:29.689836       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 19:27:29.689925       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 19:27:29.702240       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 19:27:29.702343       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5e4cb89-7e46-4c29-a950-7bc325e53ed8", APIVersion:"v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-540675_77a8cc67-16c6-48ec-a184-ebc01d02b60c became leader
	I0408 19:27:29.702907       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-540675_77a8cc67-16c6-48ec-a184-ebc01d02b60c!
	I0408 19:27:29.805565       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-540675_77a8cc67-16c6-48ec-a184-ebc01d02b60c!
	
	
	==> storage-provisioner [15ff307abc093c1b41a4f53f5c04e87afe3516b0888d759794a3061b14a77c19] <==
	I0408 19:30:12.111993       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 19:30:12.126296       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 19:30:12.126348       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 19:30:29.594464       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 19:30:29.600386       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-540675_32b5614c-fb19-4cb4-aed9-791e4234a35f!
	I0408 19:30:29.603289       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5e4cb89-7e46-4c29-a950-7bc325e53ed8", APIVersion:"v1", ResourceVersion:"793", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-540675_32b5614c-fb19-4cb4-aed9-791e4234a35f became leader
	I0408 19:30:29.703310       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-540675_32b5614c-fb19-4cb4-aed9-791e4234a35f!
	

                                                
                                                
-- /stdout --
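The storage-provisioner blocks above are client-go leader election at work: the restarted pod (second block) begins "attempting to acquire leader lease" at 19:30:12 but only acquires it at 19:30:29, once the previous holder's lease has expired. Below is a minimal sketch of that pattern, assuming the standard k8s.io/client-go API; note the v1.20-era provisioner in this log uses an Endpoints-based lock, while current client-go recommends Leases. This is illustrative, not minikube's actual code.

	// Minimal leader-election sketch mirroring the leaderelection.go:243/253
	// lines above. Namespace and lock name are taken from the log; the
	// timings and callbacks are illustrative assumptions.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		id, _ := os.Hostname()
		lock, err := resourcelock.New(
			resourcelock.LeasesResourceLock, // the logged provisioner still used an Endpoints lock
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: id},
		)
		if err != nil {
			log.Fatal(err)
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // a restarted candidate waits out the old lease, as seen above
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("became leader, starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost leadership, shutting down")
				},
			},
		})
	}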
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-540675 -n old-k8s-version-540675
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-540675 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-fvr87
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-540675 describe pod metrics-server-9975d5f86-fvr87
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-540675 describe pod metrics-server-9975d5f86-fvr87: exit status 1 (91.216478ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-fvr87" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-540675 describe pod metrics-server-9975d5f86-fvr87: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (380.70s)
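One detail worth noting in the post-mortem above: helpers_test.go:261 lists non-running pods across all namespaces (-A), but the follow-up describe at :277 names the pod without a namespace, so a pod living outside default (metrics-server normally runs in kube-system) or one deleted in the interim comes back NotFound and the step exits 1. A hedged sketch, not the harness's actual code, of a describe that carries the namespace along and treats NotFound as benign:

	// Hypothetical helper: describe a pod in its own namespace and treat
	// "NotFound" as benign, since a pod observed moments earlier may be
	// gone (or live outside the default namespace) by the time we look.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func describePod(kubecontext, namespace, pod string) error {
		out, err := exec.Command("kubectl", "--context", kubecontext,
			"-n", namespace, "describe", "pod", pod).CombinedOutput()
		if err != nil && strings.Contains(string(out), "NotFound") {
			fmt.Printf("pod %s/%s not found; skipping describe\n", namespace, pod)
			return nil
		}
		fmt.Print(string(out))
		return err
	}

	func main() {
		if err := describePod("old-k8s-version-540675", "kube-system",
			"metrics-server-9975d5f86-fvr87"); err != nil {
			fmt.Println("describe failed:", err)
		}
	}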

                                                
                                    

Test pass (296/335)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.4
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.29.3/json-events 7.45
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.19
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.30.0-rc.1/json-events 7.38
22 TestDownloadOnly/v1.30.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.30.0-rc.1/LogsDuration 0.18
27 TestDownloadOnly/v1.30.0-rc.1/DeleteAll 0.38
28 TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds 0.24
30 TestBinaryMirror 0.56
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 142.88
38 TestAddons/parallel/Registry 14.29
40 TestAddons/parallel/InspektorGadget 11.83
41 TestAddons/parallel/MetricsServer 6.9
44 TestAddons/parallel/CSI 87.92
45 TestAddons/parallel/Headlamp 10.97
46 TestAddons/parallel/CloudSpanner 5.59
47 TestAddons/parallel/LocalPath 52.34
48 TestAddons/parallel/NvidiaDevicePlugin 5.54
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.18
53 TestAddons/StoppedEnableDisable 12.27
54 TestCertOptions 37.31
55 TestCertExpiration 231.31
57 TestForceSystemdFlag 42.54
58 TestForceSystemdEnv 39.53
59 TestDockerEnvContainerd 46.98
64 TestErrorSpam/setup 30.25
65 TestErrorSpam/start 0.73
66 TestErrorSpam/status 1.01
67 TestErrorSpam/pause 2.01
68 TestErrorSpam/unpause 1.84
69 TestErrorSpam/stop 1.5
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 84.35
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 5.57
76 TestFunctional/serial/KubeContext 0.06
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.92
81 TestFunctional/serial/CacheCmd/cache/add_local 1.46
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.39
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.03
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.14
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
89 TestFunctional/serial/ExtraConfig 55.22
90 TestFunctional/serial/ComponentHealth 0.09
91 TestFunctional/serial/LogsCmd 1.7
92 TestFunctional/serial/LogsFileCmd 1.84
93 TestFunctional/serial/InvalidService 4.75
95 TestFunctional/parallel/ConfigCmd 0.56
96 TestFunctional/parallel/DashboardCmd 9.83
97 TestFunctional/parallel/DryRun 0.47
98 TestFunctional/parallel/InternationalLanguage 0.24
99 TestFunctional/parallel/StatusCmd 1.2
103 TestFunctional/parallel/ServiceCmdConnect 10.64
104 TestFunctional/parallel/AddonsCmd 0.17
105 TestFunctional/parallel/PersistentVolumeClaim 24.22
107 TestFunctional/parallel/SSHCmd 0.72
108 TestFunctional/parallel/CpCmd 1.98
110 TestFunctional/parallel/FileSync 0.33
111 TestFunctional/parallel/CertSync 2.13
115 TestFunctional/parallel/NodeLabels 0.09
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
119 TestFunctional/parallel/License 0.34
120 TestFunctional/parallel/Version/short 0.08
121 TestFunctional/parallel/Version/components 1.23
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
126 TestFunctional/parallel/ImageCommands/ImageBuild 3.23
127 TestFunctional/parallel/ImageCommands/Setup 1.77
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.58
133 TestFunctional/parallel/ProfileCmd/profile_list 0.51
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.53
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
152 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
153 TestFunctional/parallel/ServiceCmd/List 0.51
154 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
155 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
156 TestFunctional/parallel/ServiceCmd/Format 0.38
157 TestFunctional/parallel/ServiceCmd/URL 0.42
158 TestFunctional/parallel/MountCmd/any-port 7.34
159 TestFunctional/parallel/MountCmd/specific-port 2.1
160 TestFunctional/parallel/MountCmd/VerifyCleanup 1.79
161 TestFunctional/delete_addon-resizer_images 0.07
162 TestFunctional/delete_my-image_image 0.01
163 TestFunctional/delete_minikube_cached_images 0.01
167 TestMultiControlPlane/serial/StartCluster 137.13
168 TestMultiControlPlane/serial/DeployApp 18.51
169 TestMultiControlPlane/serial/PingHostFromPods 1.68
170 TestMultiControlPlane/serial/AddWorkerNode 24.39
171 TestMultiControlPlane/serial/NodeLabels 0.11
172 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.78
173 TestMultiControlPlane/serial/CopyFile 19.61
174 TestMultiControlPlane/serial/StopSecondaryNode 12.86
175 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.59
176 TestMultiControlPlane/serial/RestartSecondaryNode 19.03
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
178 TestMultiControlPlane/serial/RestartClusterKeepsNodes 141.61
179 TestMultiControlPlane/serial/DeleteSecondaryNode 11.29
180 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.55
181 TestMultiControlPlane/serial/StopCluster 36.08
182 TestMultiControlPlane/serial/RestartCluster 79.71
183 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.62
184 TestMultiControlPlane/serial/AddSecondaryNode 41.48
185 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.79
189 TestJSONOutput/start/Command 59.49
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.72
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.72
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.82
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.22
214 TestKicCustomNetwork/create_custom_network 39.62
215 TestKicCustomNetwork/use_default_bridge_network 37.32
216 TestKicExistingNetwork 34.2
217 TestKicCustomSubnet 32.24
218 TestKicStaticIP 31.62
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 64.52
223 TestMountStart/serial/StartWithMountFirst 5.85
224 TestMountStart/serial/VerifyMountFirst 0.26
225 TestMountStart/serial/StartWithMountSecond 8.33
226 TestMountStart/serial/VerifyMountSecond 0.27
227 TestMountStart/serial/DeleteFirst 1.58
228 TestMountStart/serial/VerifyMountPostDelete 0.26
229 TestMountStart/serial/Stop 1.22
230 TestMountStart/serial/RestartStopped 7.43
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestMultiNode/serial/FreshStart2Nodes 75.29
235 TestMultiNode/serial/DeployApp2Nodes 5.36
236 TestMultiNode/serial/PingHostFrom2Pods 1.08
237 TestMultiNode/serial/AddNode 19.31
238 TestMultiNode/serial/MultiNodeLabels 0.1
239 TestMultiNode/serial/ProfileList 0.35
240 TestMultiNode/serial/CopyFile 10.3
241 TestMultiNode/serial/StopNode 2.26
242 TestMultiNode/serial/StartAfterStop 9.18
243 TestMultiNode/serial/RestartKeepsNodes 86.74
244 TestMultiNode/serial/DeleteNode 5.44
245 TestMultiNode/serial/StopMultiNode 23.97
246 TestMultiNode/serial/RestartMultiNode 49.15
247 TestMultiNode/serial/ValidateNameConflict 31.25
252 TestPreload 114.32
257 TestInsufficientStorage 12.99
258 TestRunningBinaryUpgrade 86.4
260 TestKubernetesUpgrade 379.42
261 TestMissingContainerUpgrade 173.82
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
264 TestNoKubernetes/serial/StartWithK8s 41.52
265 TestNoKubernetes/serial/StartWithStopK8s 17.09
266 TestNoKubernetes/serial/Start 6.13
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
268 TestNoKubernetes/serial/ProfileList 1.05
269 TestNoKubernetes/serial/Stop 1.24
270 TestNoKubernetes/serial/StartNoArgs 7.83
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
272 TestStoppedBinaryUpgrade/Setup 1.16
273 TestStoppedBinaryUpgrade/Upgrade 113.12
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.37
283 TestPause/serial/Start 91.65
284 TestPause/serial/SecondStartNoReconfiguration 7.2
285 TestPause/serial/Pause 1.01
286 TestPause/serial/VerifyStatus 0.42
287 TestPause/serial/Unpause 0.87
288 TestPause/serial/PauseAgain 1.02
289 TestPause/serial/DeletePaused 2.77
290 TestPause/serial/VerifyDeletedResources 0.38
298 TestNetworkPlugins/group/false 5.21
303 TestStartStop/group/old-k8s-version/serial/FirstStart 173.77
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 61.27
306 TestStartStop/group/old-k8s-version/serial/DeployApp 9.6
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.37
308 TestStartStop/group/old-k8s-version/serial/Stop 12.39
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
311 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.44
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.22
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.21
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.15
316 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.11
318 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
319 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.16
321 TestStartStop/group/embed-certs/serial/FirstStart 88.94
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
325 TestStartStop/group/old-k8s-version/serial/Pause 3
327 TestStartStop/group/no-preload/serial/FirstStart 65.67
328 TestStartStop/group/embed-certs/serial/DeployApp 9.53
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.6
330 TestStartStop/group/embed-certs/serial/Stop 12.51
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
332 TestStartStop/group/embed-certs/serial/SecondStart 290.84
333 TestStartStop/group/no-preload/serial/DeployApp 8.45
334 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.4
335 TestStartStop/group/no-preload/serial/Stop 12.29
336 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
337 TestStartStop/group/no-preload/serial/SecondStart 296.5
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
341 TestStartStop/group/embed-certs/serial/Pause 3.6
343 TestStartStop/group/newest-cni/serial/FirstStart 46.09
344 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
345 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
346 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
347 TestStartStop/group/no-preload/serial/Pause 4.46
348 TestStartStop/group/newest-cni/serial/DeployApp 0
349 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.42
350 TestStartStop/group/newest-cni/serial/Stop 1.45
351 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
352 TestStartStop/group/newest-cni/serial/SecondStart 20.71
353 TestNetworkPlugins/group/auto/Start 93.05
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
357 TestStartStop/group/newest-cni/serial/Pause 3.87
358 TestNetworkPlugins/group/kindnet/Start 95.62
359 TestNetworkPlugins/group/auto/KubeletFlags 0.31
360 TestNetworkPlugins/group/auto/NetCatPod 8.3
361 TestNetworkPlugins/group/auto/DNS 0.18
362 TestNetworkPlugins/group/auto/Localhost 0.18
363 TestNetworkPlugins/group/auto/HairPin 0.16
364 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/Start 81.1
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
367 TestNetworkPlugins/group/kindnet/NetCatPod 10.37
368 TestNetworkPlugins/group/kindnet/DNS 0.25
369 TestNetworkPlugins/group/kindnet/Localhost 0.21
370 TestNetworkPlugins/group/kindnet/HairPin 0.21
371 TestNetworkPlugins/group/custom-flannel/Start 62.11
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.41
374 TestNetworkPlugins/group/calico/NetCatPod 11.33
375 TestNetworkPlugins/group/calico/DNS 0.21
376 TestNetworkPlugins/group/calico/Localhost 0.18
377 TestNetworkPlugins/group/calico/HairPin 0.16
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.33
380 TestNetworkPlugins/group/custom-flannel/DNS 0.29
381 TestNetworkPlugins/group/custom-flannel/Localhost 0.28
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
383 TestNetworkPlugins/group/enable-default-cni/Start 93.94
384 TestNetworkPlugins/group/flannel/Start 64.14
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
387 TestNetworkPlugins/group/flannel/NetCatPod 10.3
388 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
389 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
390 TestNetworkPlugins/group/flannel/DNS 0.19
391 TestNetworkPlugins/group/flannel/Localhost 0.17
392 TestNetworkPlugins/group/flannel/HairPin 0.19
393 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
394 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
395 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
396 TestNetworkPlugins/group/bridge/Start 85.31
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
398 TestNetworkPlugins/group/bridge/NetCatPod 9.26
399 TestNetworkPlugins/group/bridge/DNS 0.23
400 TestNetworkPlugins/group/bridge/Localhost 0.15
401 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (8.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-871512 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-871512 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.398129794s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.40s)
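The json-events tests drive `minikube start -o=json`, which streams one JSON event per line of stdout. A minimal consumer sketch follows; the CloudEvents-style field names ("type", "data", "message") are an assumption here, not something this report confirms.

	// Hedged sketch: read the line-delimited JSON events emitted by
	// `minikube start -o=json`. The event schema is assumed, not verified.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type event struct {
		Type string                 `json:"type"`
		Data map[string]interface{} `json:"data"`
	}

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "start",
			"-o=json", "--download-only", "-p", "download-only-871512",
			"--kubernetes-version=v1.20.0", "--container-runtime=containerd",
			"--driver=docker")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			log.Fatal(err)
		}
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		sc := bufio.NewScanner(stdout)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // events can be long lines
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON noise on stdout
			}
			fmt.Printf("%s: %v\n", ev.Type, ev.Data["message"])
		}
		if err := cmd.Wait(); err != nil {
			log.Fatal(err)
		}
	}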

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-871512
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-871512: exit status 85 (73.294879ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-871512 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC |          |
	|         | -p download-only-871512        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 18:43:00
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 18:43:00.678219  843905 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:43:00.678368  843905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:43:00.678379  843905 out.go:304] Setting ErrFile to fd 2...
	I0408 18:43:00.678385  843905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:43:00.678623  843905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
	W0408 18:43:00.678761  843905 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18585-838483/.minikube/config/config.json: open /home/jenkins/minikube-integration/18585-838483/.minikube/config/config.json: no such file or directory
	I0408 18:43:00.679213  843905 out.go:298] Setting JSON to true
	I0408 18:43:00.680241  843905 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12325,"bootTime":1712589456,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0408 18:43:00.680331  843905 start.go:139] virtualization:  
	I0408 18:43:00.683597  843905 out.go:97] [download-only-871512] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0408 18:43:00.685556  843905 out.go:169] MINIKUBE_LOCATION=18585
	W0408 18:43:00.683794  843905 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball: no such file or directory
	I0408 18:43:00.683838  843905 notify.go:220] Checking for updates...
	I0408 18:43:00.689167  843905 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:43:00.690706  843905 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	I0408 18:43:00.692796  843905 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	I0408 18:43:00.694983  843905 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0408 18:43:00.699201  843905 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 18:43:00.699571  843905 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 18:43:00.717943  843905 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0408 18:43:00.718067  843905 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 18:43:00.782659  843905 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-08 18:43:00.772768195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 18:43:00.782768  843905 docker.go:295] overlay module found
	I0408 18:43:00.784927  843905 out.go:97] Using the docker driver based on user configuration
	I0408 18:43:00.784953  843905 start.go:297] selected driver: docker
	I0408 18:43:00.784971  843905 start.go:901] validating driver "docker" against <nil>
	I0408 18:43:00.785072  843905 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 18:43:00.847170  843905 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-08 18:43:00.837975943 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 18:43:00.847339  843905 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 18:43:00.847601  843905 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0408 18:43:00.847800  843905 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 18:43:00.850069  843905 out.go:169] Using Docker driver with root privileges
	I0408 18:43:00.851915  843905 cni.go:84] Creating CNI manager for ""
	I0408 18:43:00.851933  843905 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0408 18:43:00.851943  843905 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 18:43:00.852034  843905 start.go:340] cluster config:
	{Name:download-only-871512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-871512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:43:00.854217  843905 out.go:97] Starting "download-only-871512" primary control-plane node in "download-only-871512" cluster
	I0408 18:43:00.854240  843905 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0408 18:43:00.856468  843905 out.go:97] Pulling base image v0.0.43-1712593525-18585 ...
	I0408 18:43:00.856505  843905 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0408 18:43:00.856571  843905 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local docker daemon
	I0408 18:43:00.870986  843905 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd to local cache
	I0408 18:43:00.871676  843905 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local cache directory
	I0408 18:43:00.871801  843905 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd to local cache
	I0408 18:43:00.982710  843905 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0408 18:43:00.982760  843905 cache.go:56] Caching tarball of preloaded images
	I0408 18:43:00.982927  843905 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0408 18:43:00.985750  843905 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0408 18:43:00.985786  843905 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0408 18:43:01.099494  843905 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-871512 host does not exist
	  To start a cluster, run: "minikube start -p download-only-871512"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
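The log above shows the preload fetch carrying its own integrity check: the download URL ends in `?checksum=md5:7e3d48ccb9f143791669d02e14ce1643`, and preload.go logs "getting checksum" before the transfer. Below is a minimal sketch of a checksum-verified download in that spirit; the helper names are illustrative, since minikube delegates this to its own download package.

	// Minimal sketch: download a file and verify its md5 checksum, hashing
	// the bytes as they are written to disk. URL and checksum are taken
	// from the log above; the function itself is illustrative.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"net/http"
		"os"
	)

	func downloadWithMD5(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("unexpected status: %s", resp.Status)
		}

		f, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4"
		if err := downloadWithMD5(url, "/tmp/preload.tar.lz4", "7e3d48ccb9f143791669d02e14ce1643"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("preload downloaded and verified")
	}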

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-871512
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.29.3/json-events (7.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-938784 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-938784 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.448904611s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (7.45s)

                                                
                                    
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-938784
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-938784: exit status 85 (75.539854ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-871512 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC |                     |
	|         | -p download-only-871512        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| delete  | -p download-only-871512        | download-only-871512 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| start   | -o=json --download-only        | download-only-938784 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC |                     |
	|         | -p download-only-938784        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 18:43:09
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 18:43:09.476858  844071 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:43:09.477031  844071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:43:09.477059  844071 out.go:304] Setting ErrFile to fd 2...
	I0408 18:43:09.477079  844071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:43:09.477358  844071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
	I0408 18:43:09.477793  844071 out.go:298] Setting JSON to true
	I0408 18:43:09.478864  844071 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12334,"bootTime":1712589456,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0408 18:43:09.478975  844071 start.go:139] virtualization:  
	I0408 18:43:09.481549  844071 out.go:97] [download-only-938784] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0408 18:43:09.483799  844071 out.go:169] MINIKUBE_LOCATION=18585
	I0408 18:43:09.481766  844071 notify.go:220] Checking for updates...
	I0408 18:43:09.487165  844071 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:43:09.488962  844071 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	I0408 18:43:09.490772  844071 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	I0408 18:43:09.492572  844071 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0408 18:43:09.495893  844071 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 18:43:09.496173  844071 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 18:43:09.514649  844071 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0408 18:43:09.514751  844071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 18:43:09.576422  844071 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-04-08 18:43:09.567246774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 18:43:09.576526  844071 docker.go:295] overlay module found
	I0408 18:43:09.578399  844071 out.go:97] Using the docker driver based on user configuration
	I0408 18:43:09.578424  844071 start.go:297] selected driver: docker
	I0408 18:43:09.578431  844071 start.go:901] validating driver "docker" against <nil>
	I0408 18:43:09.578541  844071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 18:43:09.629628  844071 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-04-08 18:43:09.620462349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 18:43:09.629791  844071 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 18:43:09.630104  844071 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0408 18:43:09.630266  844071 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 18:43:09.632433  844071 out.go:169] Using Docker driver with root privileges
	I0408 18:43:09.634776  844071 cni.go:84] Creating CNI manager for ""
	I0408 18:43:09.634811  844071 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0408 18:43:09.634826  844071 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 18:43:09.634914  844071 start.go:340] cluster config:
	{Name:download-only-938784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-938784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:43:09.636870  844071 out.go:97] Starting "download-only-938784" primary control-plane node in "download-only-938784" cluster
	I0408 18:43:09.636897  844071 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0408 18:43:09.638681  844071 out.go:97] Pulling base image v0.0.43-1712593525-18585 ...
	I0408 18:43:09.638715  844071 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 18:43:09.638820  844071 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local docker daemon
	I0408 18:43:09.652934  844071 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd to local cache
	I0408 18:43:09.653070  844071 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local cache directory
	I0408 18:43:09.653094  844071 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local cache directory, skipping pull
	I0408 18:43:09.653102  844071 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd exists in cache, skipping pull
	I0408 18:43:09.653110  844071 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd as a tarball
	I0408 18:43:09.700745  844071 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0408 18:43:09.700769  844071 cache.go:56] Caching tarball of preloaded images
	I0408 18:43:09.700938  844071 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 18:43:09.703083  844071 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0408 18:43:09.703108  844071 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 ...
	I0408 18:43:09.821373  844071 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:663a9a795decbfebeb48b89f3f24d179 -> /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-938784 host does not exist
	  To start a cluster, run: "minikube start -p download-only-938784"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)
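Unlike the v1.20.0 run, this one finds the kicbase image already cached (image.go:66, "skipping pull"). A hedged sketch of that check-cache-before-pull pattern follows; the cache layout used here is an assumption for illustration, not minikube's actual implementation.

	// Hedged sketch of the cache check behind the "exists in cache,
	// skipping pull" lines above. Paths and names are assumptions.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// cachedTarball returns the assumed on-disk path for an image tarball.
	func cachedTarball(cacheDir, imageDigest string) string {
		return filepath.Join(cacheDir, "kic", imageDigest+".tar")
	}

	// ensureBaseImage pulls only when the tarball is missing from the cache.
	func ensureBaseImage(cacheDir, imageDigest string, pull func() error) error {
		path := cachedTarball(cacheDir, imageDigest)
		if _, err := os.Stat(path); err == nil {
			fmt.Printf("%s exists in cache, skipping pull\n", path)
			return nil
		}
		return pull()
	}

	func main() {
		err := ensureBaseImage(os.ExpandEnv("$HOME/.minikube/cache"),
			"sha256-82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd",
			func() error { fmt.Println("pulling base image..."); return nil })
		if err != nil {
			fmt.Println("pull failed:", err)
		}
	}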

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-938784
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.1/json-events (7.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-674856 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-674856 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.382673562s)
--- PASS: TestDownloadOnly/v1.30.0-rc.1/json-events (7.38s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.1/LogsDuration (0.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-674856
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-674856: exit status 85 (175.2656ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-871512 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC |                     |
	|         | -p download-only-871512           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| delete  | -p download-only-871512           | download-only-871512 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| start   | -o=json --download-only           | download-only-938784 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC |                     |
	|         | -p download-only-938784           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| delete  | -p download-only-938784           | download-only-938784 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC | 08 Apr 24 18:43 UTC |
	| start   | -o=json --download-only           | download-only-674856 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:43 UTC |                     |
	|         | -p download-only-674856           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 18:43:17
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 18:43:17.322919  844240 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:43:17.323098  844240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:43:17.323127  844240 out.go:304] Setting ErrFile to fd 2...
	I0408 18:43:17.323152  844240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:43:17.323419  844240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
	I0408 18:43:17.323848  844240 out.go:298] Setting JSON to true
	I0408 18:43:17.324886  844240 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12342,"bootTime":1712589456,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0408 18:43:17.324983  844240 start.go:139] virtualization:  
	I0408 18:43:17.327454  844240 out.go:97] [download-only-674856] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0408 18:43:17.329936  844240 out.go:169] MINIKUBE_LOCATION=18585
	I0408 18:43:17.327657  844240 notify.go:220] Checking for updates...
	I0408 18:43:17.333928  844240 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:43:17.335574  844240 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	I0408 18:43:17.337435  844240 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	I0408 18:43:17.339128  844240 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0408 18:43:17.343032  844240 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 18:43:17.343313  844240 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 18:43:17.362990  844240 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0408 18:43:17.363093  844240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 18:43:17.422466  844240 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-08 18:43:17.411692191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 18:43:17.422592  844240 docker.go:295] overlay module found
	I0408 18:43:17.424690  844240 out.go:97] Using the docker driver based on user configuration
	I0408 18:43:17.424721  844240 start.go:297] selected driver: docker
	I0408 18:43:17.424729  844240 start.go:901] validating driver "docker" against <nil>
	I0408 18:43:17.424841  844240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 18:43:17.478160  844240 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-08 18:43:17.4697534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 18:43:17.478363  844240 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 18:43:17.478648  844240 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0408 18:43:17.478851  844240 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 18:43:17.480950  844240 out.go:169] Using Docker driver with root privileges
	I0408 18:43:17.482611  844240 cni.go:84] Creating CNI manager for ""
	I0408 18:43:17.482630  844240 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0408 18:43:17.482641  844240 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 18:43:17.482734  844240 start.go:340] cluster config:
	{Name:download-only-674856 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:download-only-674856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:43:17.484733  844240 out.go:97] Starting "download-only-674856" primary control-plane node in "download-only-674856" cluster
	I0408 18:43:17.484751  844240 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0408 18:43:17.486392  844240 out.go:97] Pulling base image v0.0.43-1712593525-18585 ...
	I0408 18:43:17.486416  844240 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime containerd
	I0408 18:43:17.486586  844240 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local docker daemon
	I0408 18:43:17.500361  844240 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd to local cache
	I0408 18:43:17.500475  844240 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local cache directory
	I0408 18:43:17.500498  844240 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd in local cache directory, skipping pull
	I0408 18:43:17.500506  844240 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd exists in cache, skipping pull
	I0408 18:43:17.500514  844240 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd as a tarball
	I0408 18:43:17.564406  844240 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.1/preloaded-images-k8s-v18-v1.30.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I0408 18:43:17.564428  844240 cache.go:56] Caching tarball of preloaded images
	I0408 18:43:17.564599  844240 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime containerd
	I0408 18:43:17.566638  844240 out.go:97] Downloading Kubernetes v1.30.0-rc.1 preload ...
	I0408 18:43:17.566657  844240 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.1-containerd-overlay2-arm64.tar.lz4 ...
	I0408 18:43:17.691846  844240 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.1/preloaded-images-k8s-v18-v1.30.0-rc.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:fd382ad131fdc680b1986dbaa92e0321 -> /home/jenkins/minikube-integration/18585-838483/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-674856 host does not exist
	  To start a cluster, run: "minikube start -p download-only-674856"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
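The "exit status 85" here is tolerated (the subtest still passes): the profile was created with --download-only, so there is no running guest for "minikube logs" to query. For reference, a test can distinguish a specific exit status from a generic failure with the exec.ExitError pattern; a minimal sketch (the command below is a stand-in, not the actual test helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode returns the process exit status, 0 on success, or -1 if the
// command never ran (e.g. binary not found).
func exitCode(err error) int {
	if err == nil {
		return 0
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return -1
}

func main() {
	// "false" exits 1; substitute e.g. "out/minikube-linux-arm64 logs -p <profile>".
	err := exec.Command("false").Run()
	fmt.Println("exit code:", exitCode(err))
}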
--- PASS: TestDownloadOnly/v1.30.0-rc.1/LogsDuration (0.18s)

TestDownloadOnly/v1.30.0-rc.1/DeleteAll (0.38s)

=== RUN   TestDownloadOnly/v1.30.0-rc.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.1/DeleteAll (0.38s)

TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds (0.24s)

=== RUN   TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-674856
--- PASS: TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds (0.24s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-809558 --alsologtostderr --binary-mirror http://127.0.0.1:35483 --driver=docker  --container-runtime=containerd
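--binary-mirror points minikube at an alternate HTTP base URL for the kubelet/kubeadm/kubectl downloads, so the test only needs something answering on 127.0.0.1:35483. A stand-in mirror can be as small as a static file server; a sketch, assuming a hypothetical local ./mirror directory laid out like the upstream release tree:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror on the address the test passes via --binary-mirror.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:35483", nil))
}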
helpers_test.go:175: Cleaning up "binary-mirror-809558" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-809558
--- PASS: TestBinaryMirror (0.56s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-038955
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-038955: exit status 85 (81.627549ms)

-- stdout --
	* Profile "addons-038955" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-038955"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-038955
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-038955: exit status 85 (90.77791ms)

-- stdout --
	* Profile "addons-038955" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-038955"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (142.88s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-038955 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-038955 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m22.883843252s)
--- PASS: TestAddons/Setup (142.88s)

TestAddons/parallel/Registry (14.29s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 49.675023ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-cdb5h" [f28c85f3-d85e-49b0-a5ac-ca0f5b10cbaf] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004866545s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-v6nbd" [620468ea-2130-4402-bd89-371f755f849b] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009508886s
addons_test.go:340: (dbg) Run:  kubectl --context addons-038955 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-038955 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-038955 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.109994934s)
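The busybox "wget --spider" probe above reduces to an HTTP request against the registry's in-cluster service name. An equivalent check in Go, as a sketch (the service DNS name only resolves from inside the cluster):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// HEAD is enough to prove the registry answers, like --spider.
	resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}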
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-038955 ip
2024/04/08 18:46:03 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-038955 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.29s)

TestAddons/parallel/InspektorGadget (11.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4gpv2" [c5ec3434-d494-41f4-9ecb-19e463055f00] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003774528s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-038955
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-038955: (5.826401092s)
--- PASS: TestAddons/parallel/InspektorGadget (11.83s)

TestAddons/parallel/MetricsServer (6.9s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 14.656421ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-6htmh" [d16800d3-aec2-4631-8d52-da9f53dd8819] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005270977s
addons_test.go:415: (dbg) Run:  kubectl --context addons-038955 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-038955 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.90s)

TestAddons/parallel/CSI (87.92s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 49.587648ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-038955 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc -o jsonpath={.status.phase} -n default
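The block of identical kubectl invocations above is a poll loop: the helper re-reads .status.phase until the PVC reports "Bound" or the 6m0s budget runs out. A minimal equivalent in Go, assuming kubectl on PATH and the context/PVC names from this log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitPVCBound polls the PVC phase via kubectl's jsonpath output.
func waitPVCBound(kubeContext, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}").Output()
		if string(out) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s not Bound within %v", name, timeout)
}

func main() {
	fmt.Println(waitPVCBound("addons-038955", "hpvc", 6*time.Minute))
}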
addons_test.go:574: (dbg) Run:  kubectl --context addons-038955 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c128896b-c09b-4d7c-af6f-e1589234a9a5] Pending
helpers_test.go:344: "task-pv-pod" [c128896b-c09b-4d7c-af6f-e1589234a9a5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c128896b-c09b-4d7c-af6f-e1589234a9a5] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004199507s
addons_test.go:584: (dbg) Run:  kubectl --context addons-038955 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-038955 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-038955 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-038955 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-038955 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-038955 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-038955 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [758a7d92-4bfb-4393-87a4-9833aeae628d] Pending
helpers_test.go:344: "task-pv-pod-restore" [758a7d92-4bfb-4393-87a4-9833aeae628d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [758a7d92-4bfb-4393-87a4-9833aeae628d] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004151694s
addons_test.go:626: (dbg) Run:  kubectl --context addons-038955 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-038955 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-038955 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-038955 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-038955 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.761904239s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-038955 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (87.92s)

TestAddons/parallel/Headlamp (10.97s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-038955 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-cmwlm" [409db5be-0c2f-4deb-b4fb-62204e4fd96b] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-cmwlm" [409db5be-0c2f-4deb-b4fb-62204e4fd96b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-cmwlm" [409db5be-0c2f-4deb-b4fb-62204e4fd96b] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003386849s
--- PASS: TestAddons/parallel/Headlamp (10.97s)

TestAddons/parallel/CloudSpanner (5.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-xkxz6" [73a4735a-4953-4213-b807-de2a4c5f674c] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004356135s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-038955
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

TestAddons/parallel/LocalPath (52.34s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-038955 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-038955 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-038955 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7234eae9-917d-4e1e-bbe8-857ee00ac025] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7234eae9-917d-4e1e-bbe8-857ee00ac025] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7234eae9-917d-4e1e-bbe8-857ee00ac025] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003568883s
addons_test.go:891: (dbg) Run:  kubectl --context addons-038955 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-038955 ssh "cat /opt/local-path-provisioner/pvc-297c0bc1-5533-4fde-9714-442aef1c05e3_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-038955 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-038955 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-038955 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-038955 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.171527248s)
--- PASS: TestAddons/parallel/LocalPath (52.34s)

TestAddons/parallel/NvidiaDevicePlugin (5.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mhg4z" [6c0fc97e-3cef-4840-9b64-155d69d9d548] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004392412s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-038955
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.54s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-djbbj" [98276ebc-a1ac-4c52-91a2-cb045b163834] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00431877s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-038955 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-038955 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/StoppedEnableDisable (12.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-038955
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-038955: (11.986686689s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-038955
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-038955
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-038955
--- PASS: TestAddons/StoppedEnableDisable (12.27s)

TestCertOptions (37.31s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-427136 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-427136 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.703802242s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-427136 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
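The openssl call dumps the apiserver certificate so the test can confirm that the extra names and IPs from --apiserver-names/--apiserver-ips landed in the SANs. The same inspection in Go, as a sketch (run it where the cert path from the log is readable, e.g. inside the node):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost, www.google.com, ...
	fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
}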
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-427136 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-427136 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-427136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-427136
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-427136: (1.951718584s)
--- PASS: TestCertOptions (37.31s)

TestCertExpiration (231.31s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-022201 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-022201 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (41.827499938s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-022201 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-022201 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.162182487s)
helpers_test.go:175: Cleaning up "cert-expiration-022201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-022201
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-022201: (2.321951033s)
--- PASS: TestCertExpiration (231.31s)

TestForceSystemdFlag (42.54s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-471739 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-471739 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.034557085s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-471739 ssh "cat /etc/containerd/config.toml"
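The "cat /etc/containerd/config.toml" step verifies that --force-systemd switched containerd's runc runtime to the systemd cgroup driver. A string-level version of the same check, as a sketch (the real test may match more precisely; a robust tool would parse the TOML):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/containerd/config.toml")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SystemdCgroup enabled:",
		strings.Contains(string(data), "SystemdCgroup = true"))
}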
helpers_test.go:175: Cleaning up "force-systemd-flag-471739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-471739
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-471739: (2.153787797s)
--- PASS: TestForceSystemdFlag (42.54s)

TestForceSystemdEnv (39.53s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-749206 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-749206 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.01368518s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-749206 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-749206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-749206
E0408 19:25:50.213983  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-749206: (2.115679185s)
--- PASS: TestForceSystemdEnv (39.53s)

TestDockerEnvContainerd (46.98s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-046021 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-046021 --driver=docker  --container-runtime=containerd: (31.049000787s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-046021"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-046021": (1.071177423s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bQIiIVRwmN0Q/agent.862202" SSH_AGENT_PID="862203" DOCKER_HOST=ssh://docker@127.0.0.1:33570 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bQIiIVRwmN0Q/agent.862202" SSH_AGENT_PID="862203" DOCKER_HOST=ssh://docker@127.0.0.1:33570 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bQIiIVRwmN0Q/agent.862202" SSH_AGENT_PID="862203" DOCKER_HOST=ssh://docker@127.0.0.1:33570 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.306156129s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bQIiIVRwmN0Q/agent.862202" SSH_AGENT_PID="862203" DOCKER_HOST=ssh://docker@127.0.0.1:33570 docker image ls"
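The docker-env flow above exports SSH_AUTH_SOCK/SSH_AGENT_PID and DOCKER_HOST=ssh://... so the host's docker CLI transparently drives the dockerd inside the minikube node over SSH. The same environment can be injected programmatically; a sketch with placeholder socket/agent values shaped like the ones in this log:

package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "version")
	cmd.Env = append(os.Environ(),
		"SSH_AUTH_SOCK=/tmp/ssh-XXXXXXXXXXXX/agent.12345", // placeholder
		"SSH_AGENT_PID=12345",                             // placeholder
		"DOCKER_HOST=ssh://docker@127.0.0.1:33570",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	_ = cmd.Run()
}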
helpers_test.go:175: Cleaning up "dockerenv-046021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-046021
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-046021: (1.952614326s)
--- PASS: TestDockerEnvContainerd (46.98s)

TestErrorSpam/setup (30.25s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-151387 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-151387 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-151387 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-151387 --driver=docker  --container-runtime=containerd: (30.244707211s)
--- PASS: TestErrorSpam/setup (30.25s)

                                                
                                    
TestErrorSpam/start (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

                                                
                                    
TestErrorSpam/status (1.01s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 status
--- PASS: TestErrorSpam/status (1.01s)

                                                
                                    
TestErrorSpam/pause (2.01s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 pause
--- PASS: TestErrorSpam/pause (2.01s)

                                                
                                    
TestErrorSpam/unpause (1.84s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

                                                
                                    
TestErrorSpam/stop (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 stop: (1.288687558s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-151387 --log_dir /tmp/nospam-151387 stop
--- PASS: TestErrorSpam/stop (1.50s)
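
The ErrorSpam subtests each run one subcommand several times against a throwaway profile and check that neither the output nor the files under --log_dir pick up unexpected warnings or errors. The pattern, sketched by hand with a hypothetical profile name:

	minikube start -p nospam-demo -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-demo
	minikube -p nospam-demo --log_dir /tmp/nospam-demo status   # each subcommand is run repeatedly
	minikube -p nospam-demo --log_dir /tmp/nospam-demo pause
	minikube -p nospam-demo --log_dir /tmp/nospam-demo unpause
	minikube -p nospam-demo --log_dir /tmp/nospam-demo stop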

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18585-838483/.minikube/files/etc/test/nested/copy/843900/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (84.35s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-435105 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0408 18:50:50.214962  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 18:50:50.221656  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 18:50:50.232008  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 18:50:50.252368  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 18:50:50.292655  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 18:50:50.372936  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 18:50:50.533215  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 18:50:50.853873  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 18:50:51.496285  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 18:50:52.777267  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 18:50:55.337497  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 18:51:00.458546  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 18:51:10.699569  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 18:51:31.180133  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-435105 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m24.344345682s)
--- PASS: TestFunctional/serial/StartWithProxy (84.35s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.57s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-435105 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-435105 --alsologtostderr -v=8: (5.571495158s)
functional_test.go:659: soft start took 5.573669184s for "functional-435105" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.57s)
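
A "soft" start is minikube start run against a profile that is already up: nothing is re-provisioned, which is why this run finishes in seconds where the initial start took over a minute. With a hypothetical profile:

	minikube start -p demo   # first run provisions the node (slow)
	minikube start -p demo   # soft start: reuses the running cluster (fast)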

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-435105 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.92s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-435105 cache add registry.k8s.io/pause:3.1: (1.464385811s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-435105 cache add registry.k8s.io/pause:3.3: (1.274136034s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-435105 cache add registry.k8s.io/pause:latest: (1.181515575s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.92s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-435105 /tmp/TestFunctionalserialCacheCmdcacheadd_local2505432520/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 cache add minikube-local-cache-test:functional-435105
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 cache delete minikube-local-cache-test:functional-435105
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-435105
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.46s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.39s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.39s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-435105 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.671186ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-435105 cache reload: (1.120338418s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)
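
The reload subtest deletes a cached image from inside the node and verifies that minikube cache reload pushes it back from the host-side cache. The same sequence by hand, with a hypothetical profile name demo:

	minikube -p demo cache add registry.k8s.io/pause:latest
	minikube -p demo ssh sudo crictl rmi registry.k8s.io/pause:latest       # remove it in the node
	minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest  # now exits non-zero
	minikube -p demo cache reload                                           # re-push cached images
	minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again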

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 kubectl -- --context functional-435105 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-435105 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (55.22s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-435105 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0408 18:52:12.140329  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-435105 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (55.218424781s)
functional_test.go:757: restart took 55.218527901s for "functional-435105" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (55.22s)
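
--extra-config hands a flag straight to the named component; here it enables an apiserver admission plugin, and --wait=all holds the restart until every component reports ready. The same invocation outside the harness (profile name hypothetical):

	minikube start -p demo \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all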

                                                
                                    
TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-435105 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
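
The health check above is just a label query over the control-plane pods; a roughly equivalent one-liner against the same context:

	kubectl --context functional-435105 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'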

                                                
                                    
TestFunctional/serial/LogsCmd (1.7s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-435105 logs: (1.70018323s)
--- PASS: TestFunctional/serial/LogsCmd (1.70s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.84s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 logs --file /tmp/TestFunctionalserialLogsFileCmd1770198916/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-435105 logs --file /tmp/TestFunctionalserialLogsFileCmd1770198916/001/logs.txt: (1.842779689s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.84s)

                                                
                                    
TestFunctional/serial/InvalidService (4.75s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-435105 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-435105
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-435105: exit status 115 (618.677876ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31952 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-435105 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.75s)
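
minikube service refuses to hand out a URL when the service has no running backend pod, exiting 115 with SVC_UNREACHABLE as shown above. A sketch, where invalidsvc.yaml stands in for the testdata manifest:

	kubectl --context demo apply -f invalidsvc.yaml   # service whose pod can never start
	minikube -p demo service invalid-svc              # exit status 115: SVC_UNREACHABLE
	kubectl --context demo delete -f invalidsvc.yaml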

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-435105 config get cpus: exit status 14 (104.486445ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-435105 config get cpus: exit status 14 (97.279224ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.56s)
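
Note the exit-code contract being exercised: config get returns 14 whenever the key is unset and 0 once a value exists. By hand, with a hypothetical profile:

	minikube -p demo config get cpus     # exit 14: key not found in config
	minikube -p demo config set cpus 2
	minikube -p demo config get cpus     # prints 2, exit 0
	minikube -p demo config unset cpus   # the next get is back to exit 14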

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-435105 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-435105 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 877501: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.83s)

                                                
                                    
TestFunctional/parallel/DryRun (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-435105 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-435105 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (190.209348ms)

                                                
                                                
-- stdout --
	* [functional-435105] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 18:53:30.304502  876651 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:53:30.306174  876651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:53:30.306218  876651 out.go:304] Setting ErrFile to fd 2...
	I0408 18:53:30.306239  876651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:53:30.306616  876651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
	I0408 18:53:30.308273  876651 out.go:298] Setting JSON to false
	I0408 18:53:30.309376  876651 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12955,"bootTime":1712589456,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0408 18:53:30.309455  876651 start.go:139] virtualization:  
	I0408 18:53:30.312184  876651 out.go:177] * [functional-435105] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0408 18:53:30.314623  876651 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 18:53:30.316696  876651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:53:30.314853  876651 notify.go:220] Checking for updates...
	I0408 18:53:30.321326  876651 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	I0408 18:53:30.323153  876651 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	I0408 18:53:30.324928  876651 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0408 18:53:30.326990  876651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 18:53:30.329237  876651 config.go:182] Loaded profile config "functional-435105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:53:30.329812  876651 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 18:53:30.348598  876651 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0408 18:53:30.348709  876651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 18:53:30.415607  876651 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-08 18:53:30.402507206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 18:53:30.415714  876651 docker.go:295] overlay module found
	I0408 18:53:30.418218  876651 out.go:177] * Using the docker driver based on existing profile
	I0408 18:53:30.420552  876651 start.go:297] selected driver: docker
	I0408 18:53:30.420570  876651 start.go:901] validating driver "docker" against &{Name:functional-435105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-435105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:53:30.420695  876651 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 18:53:30.423970  876651 out.go:177] 
	W0408 18:53:30.425588  876651 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0408 18:53:30.427537  876651 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-435105 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.47s)
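
--dry-run validates the requested flags against the existing profile without changing anything; an undersized --memory request fails fast with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as the stderr above shows. Reproduced by hand:

	minikube start -p demo --dry-run --memory 250MB   # exit 23: below the 1800MB usable minimum
	minikube start -p demo --dry-run                  # valid config: exits 0, cluster untouched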

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-435105 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-435105 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (234.885229ms)

                                                
                                                
-- stdout --
	* [functional-435105] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 18:53:33.142474  877230 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:53:33.142716  877230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:53:33.142748  877230 out.go:304] Setting ErrFile to fd 2...
	I0408 18:53:33.142767  877230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:53:33.143702  877230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
	I0408 18:53:33.144153  877230 out.go:298] Setting JSON to false
	I0408 18:53:33.145134  877230 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12958,"bootTime":1712589456,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0408 18:53:33.145267  877230 start.go:139] virtualization:  
	I0408 18:53:33.148010  877230 out.go:177] * [functional-435105] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	I0408 18:53:33.150430  877230 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 18:53:33.152239  877230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:53:33.150574  877230 notify.go:220] Checking for updates...
	I0408 18:53:33.154346  877230 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	I0408 18:53:33.156254  877230 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	I0408 18:53:33.158040  877230 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0408 18:53:33.159768  877230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 18:53:33.161870  877230 config.go:182] Loaded profile config "functional-435105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:53:33.162492  877230 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 18:53:33.183015  877230 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0408 18:53:33.183121  877230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 18:53:33.283154  877230 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-08 18:53:33.272930487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 18:53:33.283261  877230 docker.go:295] overlay module found
	I0408 18:53:33.291276  877230 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0408 18:53:33.293656  877230 start.go:297] selected driver: docker
	I0408 18:53:33.293678  877230 start.go:901] validating driver "docker" against &{Name:functional-435105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-435105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:53:33.293775  877230 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 18:53:33.297213  877230 out.go:177] 
	W0408 18:53:33.299656  877230 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0408 18:53:33.302404  877230 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)
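
The French output above is the same RSRC_INSUFFICIENT_REQ_MEMORY failure rendered through minikube's bundled translations, which follow the process locale. A sketch, assuming a French locale is installed on the host:

	LC_ALL=fr_FR.UTF-8 minikube start -p demo --dry-run --memory 250MB
	# => X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ...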

                                                
                                    
TestFunctional/parallel/StatusCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)
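
status accepts a Go template over fields such as .Host, .Kubelet, .APIServer and .Kubeconfig, or structured output via -o json, as exercised above. With a hypothetical profile:

	minikube -p demo status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
	minikube -p demo status -o json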

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-435105 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-435105 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-24jks" [8f09abca-79a2-420c-9116-f9b866bc1b0b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-24jks" [8f09abca-79a2-420c-9116-f9b866bc1b0b] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003605301s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31267
functional_test.go:1671: http://192.168.49.2:31267: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-24jks

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31267
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.64s)
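
The connect test is the standard NodePort round trip: create a deployment, expose it, have minikube resolve the node URL, then hit it. By hand, with hypothetical names:

	kubectl --context demo create deployment hello --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context demo expose deployment hello --type=NodePort --port=8080
	URL=$(minikube -p demo service hello --url)
	curl "$URL"   # the echoserver reports hostname, headers and request details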

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (24.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1cb543b0-e3d8-4d97-ae80-70d81fa023f2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004918355s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-435105 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-435105 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-435105 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-435105 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c6c2078c-ec61-42fb-a468-321c8f662d10] Pending
helpers_test.go:344: "sp-pod" [c6c2078c-ec61-42fb-a468-321c8f662d10] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c6c2078c-ec61-42fb-a468-321c8f662d10] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004299448s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-435105 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-435105 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-435105 delete -f testdata/storage-provisioner/pod.yaml: (1.185599784s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-435105 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8e46b3e4-b863-4f2f-bfcf-08606a81e968] Pending
helpers_test.go:344: "sp-pod" [8e46b3e4-b863-4f2f-bfcf-08606a81e968] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0043984s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-435105 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.22s)
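
The second pod is the point of the test: a file written through the claim must survive deleting and recreating the pod. Sketched with stand-in manifests for the testdata files:

	kubectl --context demo apply -f pvc.yaml
	kubectl --context demo apply -f pod.yaml             # pod mounts the claim at /tmp/mount
	kubectl --context demo exec sp-pod -- touch /tmp/mount/foo
	kubectl --context demo delete -f pod.yaml
	kubectl --context demo apply -f pod.yaml             # fresh pod, same claim
	kubectl --context demo exec sp-pod -- ls /tmp/mount  # foo is still there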

                                                
                                    
TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh -n functional-435105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 cp functional-435105:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2667111206/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh -n functional-435105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh -n functional-435105 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.98s)
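
minikube cp copies in both directions, using <profile>:<path> to name the node side, and creates missing parent directories on the node, which is what the third case above checks. With a hypothetical profile:

	minikube -p demo cp testdata/cp-test.txt /home/docker/cp-test.txt           # host -> node
	minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test.txt          # node -> host
	minikube -p demo cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt    # parents created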

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/843900/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "sudo cat /etc/test/nested/copy/843900/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
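
File sync mirrors everything under the .minikube/files directory into the node at the same path, which is why the hosts file staged at .minikube/files/etc/test/nested/copy/843900/hosts shows up at /etc/test/nested/copy/843900/hosts in the VM. A sketch of checking a synced file by hand:

	mkdir -p ~/.minikube/files/etc/demo
	echo hello > ~/.minikube/files/etc/demo/marker
	minikube start -p demo                        # files are synced during start
	minikube -p demo ssh "cat /etc/demo/marker"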

                                                
                                    
TestFunctional/parallel/CertSync (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/843900.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "sudo cat /etc/ssl/certs/843900.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/843900.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "sudo cat /usr/share/ca-certificates/843900.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/8439002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "sudo cat /etc/ssl/certs/8439002.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/8439002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "sudo cat /usr/share/ca-certificates/8439002.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.13s)
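
The hash-named files checked above (51391683.0, 3ec20f2e.0) appear to follow the OpenSSL subject-hash convention: each synced certificate is also exposed under an 8-hex-digit hash name so OpenSSL can find it during CA lookup. Assuming a PEM file such as 843900.pem, the hash half of the pair can be derived with:

    openssl x509 -noout -subject_hash -in 843900.pem    # prints the hash that names the corresponding .0 file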

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-435105 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-435105 ssh "sudo systemctl is-active docker": exit status 1 (315.205113ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-435105 ssh "sudo systemctl is-active crio": exit status 1 (335.184767ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)
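
Both non-zero exits above are the expected outcome: on a containerd cluster, "systemctl is-active docker" and "systemctl is-active crio" print "inactive" and exit with status 3 (systemd's code for an inactive unit), which minikube ssh propagates. Reproduced by hand:

    out/minikube-linux-arm64 -p functional-435105 ssh "sudo systemctl is-active docker"    # prints "inactive", exits 3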

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.23s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 version -o=json --components
2024/04/08 18:53:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-435105 version -o=json --components: (1.228019753s)
--- PASS: TestFunctional/parallel/Version/components (1.23s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-435105 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-435105
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-435105 image ls --format short --alsologtostderr:
I0408 18:53:43.594813  878739 out.go:291] Setting OutFile to fd 1 ...
I0408 18:53:43.595000  878739 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:53:43.595030  878739 out.go:304] Setting ErrFile to fd 2...
I0408 18:53:43.595051  878739 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:53:43.595323  878739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
I0408 18:53:43.596060  878739 config.go:182] Loaded profile config "functional-435105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:53:43.596225  878739 config.go:182] Loaded profile config "functional-435105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:53:43.596848  878739 cli_runner.go:164] Run: docker container inspect functional-435105 --format={{.State.Status}}
I0408 18:53:43.618303  878739 ssh_runner.go:195] Run: systemctl --version
I0408 18:53:43.618360  878739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-435105
I0408 18:53:43.636400  878739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33580 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/functional-435105/id_rsa Username:docker}
I0408 18:53:43.744035  878739 ssh_runner.go:195] Run: sudo crictl images --output json
W0408 18:53:43.792776  878739 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 904848ef-cf73-4523-ac9e-7227d04cdcbc
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)
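
As the stderr trace shows, on a containerd node "image ls" is backed by a CRI query; the same data can be pulled directly when debugging:

    out/minikube-linux-arm64 -p functional-435105 ssh "sudo crictl images --output json"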

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-435105 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/library/minikube-local-cache-test | functional-435105  | sha256:34ad26 | 990B   |
| registry.k8s.io/kube-apiserver              | v1.29.3            | sha256:258111 | 32.1MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/kube-controller-manager     | v1.29.3            | sha256:121d70 | 30.6MB |
| registry.k8s.io/kube-scheduler              | v1.29.3            | sha256:4b51f9 | 16.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:014faa | 66.2MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4740c1 | 25.3MB |
| docker.io/library/nginx                     | latest             | sha256:070027 | 67.2MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| docker.io/library/nginx                     | alpine             | sha256:b8c826 | 17.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-proxy                  | v1.29.3            | sha256:0e9b4a | 25MB   |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-435105 image ls --format table --alsologtostderr:
I0408 18:53:44.286514  878902 out.go:291] Setting OutFile to fd 1 ...
I0408 18:53:44.286627  878902 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:53:44.286637  878902 out.go:304] Setting ErrFile to fd 2...
I0408 18:53:44.286642  878902 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:53:44.286888  878902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
I0408 18:53:44.287475  878902 config.go:182] Loaded profile config "functional-435105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:53:44.287596  878902 config.go:182] Loaded profile config "functional-435105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:53:44.288058  878902 cli_runner.go:164] Run: docker container inspect functional-435105 --format={{.State.Status}}
I0408 18:53:44.314634  878902 ssh_runner.go:195] Run: systemctl --version
I0408 18:53:44.314686  878902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-435105
I0408 18:53:44.335252  878902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33580 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/functional-435105/id_rsa Username:docker}
I0408 18:53:44.434725  878902 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-435105 image ls --format json --alsologtostderr:
[{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},
{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},
{"id":"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"66189079"},
{"id":"sha256:34ad26a604121b69f20aa68fd00a172ae1e2b1fb6c832448dfb67f30807fdee7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-435105"],"size":"990"},
{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},
{"id":"sha256:b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0","repoDigests":["docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17601398"},
{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},
{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},
{"id":"sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"32143347"},
{"id":"sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775","repoDigests":["registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"25039677"},
{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},
{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},
{"id":"sha256:070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"67216851"},
{"id":"sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"25336339"},
{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},
{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},
{"id":"sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"30578527"},
{"id":"sha256:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"16931371"},
{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-435105 image ls --format json --alsologtostderr:
I0408 18:53:43.988842  878832 out.go:291] Setting OutFile to fd 1 ...
I0408 18:53:43.988977  878832 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:53:43.988982  878832 out.go:304] Setting ErrFile to fd 2...
I0408 18:53:43.988987  878832 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:53:43.990292  878832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
I0408 18:53:43.991133  878832 config.go:182] Loaded profile config "functional-435105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:53:43.991529  878832 config.go:182] Loaded profile config "functional-435105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:53:43.992474  878832 cli_runner.go:164] Run: docker container inspect functional-435105 --format={{.State.Status}}
I0408 18:53:44.019787  878832 ssh_runner.go:195] Run: systemctl --version
I0408 18:53:44.019850  878832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-435105
I0408 18:53:44.045534  878832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33580 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/functional-435105/id_rsa Username:docker}
I0408 18:53:44.142436  878832 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)
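
The JSON format is the easiest of the four to post-process. One possible host-side filter, assuming jq is installed (jq is not part of the test), that prints only tagged images:

    out/minikube-linux-arm64 -p functional-435105 image ls --format json \
        | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[]'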

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-435105 image ls --format yaml --alsologtostderr:
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53
repoDigests:
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "67216851"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "66189079"
- id: sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "32143347"
- id: sha256:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "16931371"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:34ad26a604121b69f20aa68fd00a172ae1e2b1fb6c832448dfb67f30807fdee7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-435105
size: "990"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "25039677"
- id: sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "25336339"
- id: sha256:b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0
repoDigests:
- docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742
repoTags:
- docker.io/library/nginx:alpine
size: "17601398"
- id: sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "30578527"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-435105 image ls --format yaml --alsologtostderr:
I0408 18:53:43.682361  878774 out.go:291] Setting OutFile to fd 1 ...
I0408 18:53:43.682580  878774 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:53:43.682602  878774 out.go:304] Setting ErrFile to fd 2...
I0408 18:53:43.682621  878774 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:53:43.682887  878774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
I0408 18:53:43.683500  878774 config.go:182] Loaded profile config "functional-435105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:53:43.683679  878774 config.go:182] Loaded profile config "functional-435105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:53:43.684238  878774 cli_runner.go:164] Run: docker container inspect functional-435105 --format={{.State.Status}}
I0408 18:53:43.705237  878774 ssh_runner.go:195] Run: systemctl --version
I0408 18:53:43.705295  878774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-435105
I0408 18:53:43.721770  878774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33580 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/functional-435105/id_rsa Username:docker}
I0408 18:53:43.819048  878774 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-435105 ssh pgrep buildkitd: exit status 1 (366.933639ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image build -t localhost/my-image:functional-435105 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-435105 image build -t localhost/my-image:functional-435105 testdata/build --alsologtostderr: (2.625890742s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-435105 image build -t localhost/my-image:functional-435105 testdata/build --alsologtostderr:
I0408 18:53:44.242905  878897 out.go:291] Setting OutFile to fd 1 ...
I0408 18:53:44.243693  878897 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:53:44.243726  878897 out.go:304] Setting ErrFile to fd 2...
I0408 18:53:44.243746  878897 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:53:44.244032  878897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
I0408 18:53:44.244672  878897 config.go:182] Loaded profile config "functional-435105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:53:44.245945  878897 config.go:182] Loaded profile config "functional-435105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:53:44.246561  878897 cli_runner.go:164] Run: docker container inspect functional-435105 --format={{.State.Status}}
I0408 18:53:44.277574  878897 ssh_runner.go:195] Run: systemctl --version
I0408 18:53:44.277626  878897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-435105
I0408 18:53:44.306469  878897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33580 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/functional-435105/id_rsa Username:docker}
I0408 18:53:44.410374  878897 build_images.go:161] Building image from path: /tmp/build.2729349044.tar
I0408 18:53:44.410446  878897 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0408 18:53:44.420131  878897 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2729349044.tar
I0408 18:53:44.423490  878897 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2729349044.tar: stat -c "%s %y" /var/lib/minikube/build/build.2729349044.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2729349044.tar': No such file or directory
I0408 18:53:44.423518  878897 ssh_runner.go:362] scp /tmp/build.2729349044.tar --> /var/lib/minikube/build/build.2729349044.tar (3072 bytes)
I0408 18:53:44.449508  878897 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2729349044
I0408 18:53:44.459396  878897 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2729349044 -xf /var/lib/minikube/build/build.2729349044.tar
I0408 18:53:44.479520  878897 containerd.go:394] Building image: /var/lib/minikube/build/build.2729349044
I0408 18:53:44.479661  878897 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2729349044 --local dockerfile=/var/lib/minikube/build/build.2729349044 --output type=image,name=localhost/my-image:functional-435105
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.6s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:7f1600c0cc2ee8994ac03819ce7611dae0fea8a0434710ac079e2884fead0432 0.0s done
#8 exporting config sha256:76bc49579c005124578c259601e9745958936483db55bdd9024b0c924ce70ecf 0.0s done
#8 naming to localhost/my-image:functional-435105 done
#8 DONE 0.2s
I0408 18:53:46.762356  878897 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2729349044 --local dockerfile=/var/lib/minikube/build/build.2729349044 --output type=image,name=localhost/my-image:functional-435105: (2.28264247s)
I0408 18:53:46.762424  878897 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2729349044
I0408 18:53:46.772567  878897 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2729349044.tar
I0408 18:53:46.782105  878897 build_images.go:217] Built localhost/my-image:functional-435105 from /tmp/build.2729349044.tar
I0408 18:53:46.782173  878897 build_images.go:133] succeeded building to: functional-435105
I0408 18:53:46.782185  878897 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.23s)
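
From the BuildKit steps in the trace (#5 FROM gcr.io/k8s-minikube/busybox, #6 RUN true, #7 ADD content.txt /), testdata/build plausibly contains a three-instruction Dockerfile along these lines (a reconstruction from the log, not the verbatim file):

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /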

TestFunctional/parallel/ImageCommands/Setup (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.738116317s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-435105
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "429.960864ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "84.823682ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "405.760343ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "77.524852ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-435105 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-435105 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-435105 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-435105 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 874866: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-435105 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.53s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-435105 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [874e4ad6-01ce-49b0-b455-e1b868b73c1e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [874e4ad6-01ce-49b0-b455-e1b868b73c1e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004123355s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.53s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-435105 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.125.153 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
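
"Working" here means the tunnel gave the LoadBalancer service a routable ingress IP. With the tunnel from StartTunnel still running, the check reduces to roughly the following (IP taken from the log above):

    kubectl --context functional-435105 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl -s http://10.105.125.153/    # nginx answers once the tunnel is routing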

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-435105 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image rm gcr.io/google-containers/addon-resizer:functional-435105 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-435105
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 image save --daemon gcr.io/google-containers/addon-resizer:functional-435105 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-435105
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-435105 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-435105 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-hd42n" [0c8a7a25-ac73-4525-acbb-61cafd90e2a0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-hd42n" [0c8a7a25-ac73-4525-acbb-61cafd90e2a0] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004318307s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 service list -o json
functional_test.go:1490: Took "523.625296ms" to run "out/minikube-linux-arm64 -p functional-435105 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30506
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30506
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)
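
HTTPS, Format, and URL all resolve to the same NodePort (30506) on the node IP; a quick manual probe of the endpoint found above would be:

    curl -s http://192.168.49.2:30506/    # echoserver reply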

TestFunctional/parallel/MountCmd/any-port (7.34s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-435105 /tmp/TestFunctionalparallelMountCmdany-port3192672599/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1712602410721132121" to /tmp/TestFunctionalparallelMountCmdany-port3192672599/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1712602410721132121" to /tmp/TestFunctionalparallelMountCmdany-port3192672599/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1712602410721132121" to /tmp/TestFunctionalparallelMountCmdany-port3192672599/001/test-1712602410721132121
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-435105 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (368.049159ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  8 18:53 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  8 18:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  8 18:53 test-1712602410721132121
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh cat /mount-9p/test-1712602410721132121
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-435105 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c497aaa2-b484-44d2-b6aa-79028a927f6c] Pending
helpers_test.go:344: "busybox-mount" [c497aaa2-b484-44d2-b6aa-79028a927f6c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0408 18:53:34.061323  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [c497aaa2-b484-44d2-b6aa-79028a927f6c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c497aaa2-b484-44d2-b6aa-79028a927f6c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004752016s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-435105 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-435105 /tmp/TestFunctionalparallelMountCmdany-port3192672599/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.34s)
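
The first findmnt failure in this block is benign: the 9p server behind "minikube mount" comes up asynchronously, so the test simply retries until the mount appears. The same handshake by hand (host path from the log; any writable directory would do):

    out/minikube-linux-arm64 mount -p functional-435105 /tmp/TestFunctionalparallelMountCmdany-port3192672599/001:/mount-9p &
    out/minikube-linux-arm64 -p functional-435105 ssh "findmnt -T /mount-9p"    # retry until the 9p mount shows up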

TestFunctional/parallel/MountCmd/specific-port (2.1s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-435105 /tmp/TestFunctionalparallelMountCmdspecific-port888286787/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-435105 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (425.716468ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-435105 /tmp/TestFunctionalparallelMountCmdspecific-port888286787/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-435105 ssh "sudo umount -f /mount-9p": exit status 1 (344.679237ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-435105 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-435105 /tmp/TestFunctionalparallelMountCmdspecific-port888286787/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.10s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-435105 /tmp/TestFunctionalparallelMountCmdVerifyCleanup505431880/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-435105 /tmp/TestFunctionalparallelMountCmdVerifyCleanup505431880/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-435105 /tmp/TestFunctionalparallelMountCmdVerifyCleanup505431880/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-435105 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-435105 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-435105 /tmp/TestFunctionalparallelMountCmdVerifyCleanup505431880/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-435105 /tmp/TestFunctionalparallelMountCmdVerifyCleanup505431880/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-435105 /tmp/TestFunctionalparallelMountCmdVerifyCleanup505431880/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)
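
VerifyCleanup starts three mount daemons for /mount1, /mount2, and /mount3, then runs a single "mount -p <profile> --kill=true", which is expected to terminate all of them; the "unable to find parent, assuming dead" lines show the deferred stoppers finding each daemon already gone. A hedged sketch of that start-then-kill sequence (illustrative wrapper only; the source directory is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const profile = "functional-435105"
	src := "/tmp/example-mount-src" // hypothetical host directory
	var daemons []*exec.Cmd
	for _, target := range []string{"/mount1", "/mount2", "/mount3"} {
		cmd := exec.Command("out/minikube-linux-arm64", "mount",
			"-p", profile, src+":"+target)
		if err := cmd.Start(); err != nil {
			fmt.Println("start failed:", err)
			return
		}
		daemons = append(daemons, cmd)
	}
	// One --kill=true invocation is expected to tear down every mount
	// daemon belonging to the profile.
	if err := exec.Command("out/minikube-linux-arm64", "mount",
		"-p", profile, "--kill=true").Run(); err != nil {
		fmt.Println("kill failed:", err)
	}
	for _, d := range daemons {
		d.Wait() // reap; by now each daemon should have exited
	}
}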

TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-435105
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-435105
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-435105
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (137.13s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-550237 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0408 18:55:50.214040  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-550237 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m16.272420819s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (137.13s)

TestMultiControlPlane/serial/DeployApp (18.51s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- rollout status deployment/busybox
E0408 18:56:17.904445  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-550237 -- rollout status deployment/busybox: (15.408335733s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- exec busybox-7fdf7869d9-5b7vk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- exec busybox-7fdf7869d9-7dp68 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- exec busybox-7fdf7869d9-9df9p -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- exec busybox-7fdf7869d9-5b7vk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- exec busybox-7fdf7869d9-7dp68 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- exec busybox-7fdf7869d9-9df9p -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- exec busybox-7fdf7869d9-5b7vk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- exec busybox-7fdf7869d9-7dp68 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- exec busybox-7fdf7869d9-9df9p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (18.51s)
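
DeployApp resolves three names (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local) from every busybox replica, exercising both upstream and in-cluster DNS on each node that hosts a pod. A compact sketch of that pod-by-name loop (an assumed driver, not the test source; profile name from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile = "ha-550237"
	out, err := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	names := []string{"kubernetes.io", "kubernetes.default",
		"kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, name := range names {
			// nslookup inside the pod; a non-zero exit marks a DNS failure.
			err := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", profile,
				"--", "exec", pod, "--", "nslookup", name).Run()
			fmt.Printf("%s -> %s: ok=%v\n", pod, name, err == nil)
		}
	}
}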

TestMultiControlPlane/serial/PingHostFromPods (1.68s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- exec busybox-7fdf7869d9-5b7vk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- exec busybox-7fdf7869d9-5b7vk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- exec busybox-7fdf7869d9-7dp68 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- exec busybox-7fdf7869d9-7dp68 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- exec busybox-7fdf7869d9-9df9p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-550237 -- exec busybox-7fdf7869d9-9df9p -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.68s)
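
The shell pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) extracts the host IP from busybox nslookup output: line 5 carries the answer, and its third space-delimited field is the address, which the test then pings (192.168.49.1 is the docker network gateway). A small Go sketch of the same extraction; the sample output is an assumption about the busybox image's nslookup format, not captured from this run:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Assumed busybox nslookup output; the test captures this via kubectl exec.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1\n"
	lines := strings.Split(sample, "\n")
	if len(lines) < 5 {
		fmt.Println("unexpected nslookup output")
		return
	}
	fields := strings.Fields(lines[4]) // awk 'NR==5'
	if len(fields) < 3 {
		fmt.Println("unexpected answer line")
		return
	}
	// cut -d' ' -f3; note Fields collapses runs of spaces, cut does not.
	fmt.Println("host IP:", fields[2])
}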

TestMultiControlPlane/serial/AddWorkerNode (24.39s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-550237 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-550237 -v=7 --alsologtostderr: (23.367930053s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-550237 status -v=7 --alsologtostderr: (1.026301902s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.39s)

TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-550237 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

TestMultiControlPlane/serial/CopyFile (19.61s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp testdata/cp-test.txt ha-550237:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3994886250/001/cp-test_ha-550237.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237:/home/docker/cp-test.txt ha-550237-m02:/home/docker/cp-test_ha-550237_ha-550237-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m02 "sudo cat /home/docker/cp-test_ha-550237_ha-550237-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237:/home/docker/cp-test.txt ha-550237-m03:/home/docker/cp-test_ha-550237_ha-550237-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m03 "sudo cat /home/docker/cp-test_ha-550237_ha-550237-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237:/home/docker/cp-test.txt ha-550237-m04:/home/docker/cp-test_ha-550237_ha-550237-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m04 "sudo cat /home/docker/cp-test_ha-550237_ha-550237-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp testdata/cp-test.txt ha-550237-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3994886250/001/cp-test_ha-550237-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237-m02:/home/docker/cp-test.txt ha-550237:/home/docker/cp-test_ha-550237-m02_ha-550237.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237 "sudo cat /home/docker/cp-test_ha-550237-m02_ha-550237.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237-m02:/home/docker/cp-test.txt ha-550237-m03:/home/docker/cp-test_ha-550237-m02_ha-550237-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m03 "sudo cat /home/docker/cp-test_ha-550237-m02_ha-550237-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237-m02:/home/docker/cp-test.txt ha-550237-m04:/home/docker/cp-test_ha-550237-m02_ha-550237-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m04 "sudo cat /home/docker/cp-test_ha-550237-m02_ha-550237-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp testdata/cp-test.txt ha-550237-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3994886250/001/cp-test_ha-550237-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237-m03:/home/docker/cp-test.txt ha-550237:/home/docker/cp-test_ha-550237-m03_ha-550237.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237 "sudo cat /home/docker/cp-test_ha-550237-m03_ha-550237.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237-m03:/home/docker/cp-test.txt ha-550237-m02:/home/docker/cp-test_ha-550237-m03_ha-550237-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m02 "sudo cat /home/docker/cp-test_ha-550237-m03_ha-550237-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237-m03:/home/docker/cp-test.txt ha-550237-m04:/home/docker/cp-test_ha-550237-m03_ha-550237-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m04 "sudo cat /home/docker/cp-test_ha-550237-m03_ha-550237-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp testdata/cp-test.txt ha-550237-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3994886250/001/cp-test_ha-550237-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237-m04:/home/docker/cp-test.txt ha-550237:/home/docker/cp-test_ha-550237-m04_ha-550237.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237 "sudo cat /home/docker/cp-test_ha-550237-m04_ha-550237.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237-m04:/home/docker/cp-test.txt ha-550237-m02:/home/docker/cp-test_ha-550237-m04_ha-550237-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m02 "sudo cat /home/docker/cp-test_ha-550237-m04_ha-550237-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 cp ha-550237-m04:/home/docker/cp-test.txt ha-550237-m03:/home/docker/cp-test_ha-550237-m04_ha-550237-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 ssh -n ha-550237-m03 "sudo cat /home/docker/cp-test_ha-550237-m04_ha-550237-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.61s)
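
CopyFile runs cp in every direction: host to node, node to host, and node to node for each ordered pair of ha-550237, -m02, -m03, and -m04, verifying each destination with "ssh -n <node> sudo cat". A sketch of that copy matrix (a hypothetical driver, not the helpers in helpers_test.go):

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	return exec.Command("out/minikube-linux-arm64", args...).Run()
}

func main() {
	const profile = "ha-550237"
	nodes := []string{"ha-550237", "ha-550237-m02", "ha-550237-m03", "ha-550237-m04"}
	for _, src := range nodes {
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			remote := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			if err := run("-p", profile, "cp",
				src+":/home/docker/cp-test.txt", dst+":"+remote); err != nil {
				fmt.Println("cp failed:", err)
				continue
			}
			// Verify the copy landed, as the test does with ssh -n.
			if err := run("-p", profile, "ssh", "-n", dst,
				"sudo cat "+remote); err != nil {
				fmt.Println("verify failed:", err)
			}
		}
	}
}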

TestMultiControlPlane/serial/StopSecondaryNode (12.86s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-550237 node stop m02 -v=7 --alsologtostderr: (12.103433617s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-550237 status -v=7 --alsologtostderr: exit status 7 (759.138099ms)

-- stdout --
	ha-550237
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-550237-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-550237-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-550237-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0408 18:57:24.166700  894348 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:57:24.166842  894348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:57:24.166852  894348 out.go:304] Setting ErrFile to fd 2...
	I0408 18:57:24.166857  894348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:57:24.167094  894348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
	I0408 18:57:24.167290  894348 out.go:298] Setting JSON to false
	I0408 18:57:24.167322  894348 mustload.go:65] Loading cluster: ha-550237
	I0408 18:57:24.167433  894348 notify.go:220] Checking for updates...
	I0408 18:57:24.167776  894348 config.go:182] Loaded profile config "ha-550237": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:57:24.167794  894348 status.go:255] checking status of ha-550237 ...
	I0408 18:57:24.168682  894348 cli_runner.go:164] Run: docker container inspect ha-550237 --format={{.State.Status}}
	I0408 18:57:24.187820  894348 status.go:330] ha-550237 host status = "Running" (err=<nil>)
	I0408 18:57:24.187850  894348 host.go:66] Checking if "ha-550237" exists ...
	I0408 18:57:24.188252  894348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550237
	I0408 18:57:24.208571  894348 host.go:66] Checking if "ha-550237" exists ...
	I0408 18:57:24.208939  894348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 18:57:24.209020  894348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550237
	I0408 18:57:24.237811  894348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33585 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/ha-550237/id_rsa Username:docker}
	I0408 18:57:24.347490  894348 ssh_runner.go:195] Run: systemctl --version
	I0408 18:57:24.351910  894348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:57:24.364874  894348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 18:57:24.426830  894348 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:72 SystemTime:2024-04-08 18:57:24.416623385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 18:57:24.427424  894348 kubeconfig.go:125] found "ha-550237" server: "https://192.168.49.254:8443"
	I0408 18:57:24.427512  894348 api_server.go:166] Checking apiserver status ...
	I0408 18:57:24.427566  894348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 18:57:24.440043  894348 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1430/cgroup
	I0408 18:57:24.451068  894348 api_server.go:182] apiserver freezer: "8:freezer:/docker/f7cbcc0ecc11bfa07acad5479c94e73d3c185739ff0b438a6c08fa5b25ed1432/kubepods/burstable/podfb75ef9f990381039c51d9de723294d3/acfeac5d197791c2d5f9d24feaa2e199908999a4a7dd24bf344223be7679c378"
	I0408 18:57:24.451185  894348 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f7cbcc0ecc11bfa07acad5479c94e73d3c185739ff0b438a6c08fa5b25ed1432/kubepods/burstable/podfb75ef9f990381039c51d9de723294d3/acfeac5d197791c2d5f9d24feaa2e199908999a4a7dd24bf344223be7679c378/freezer.state
	I0408 18:57:24.461295  894348 api_server.go:204] freezer state: "THAWED"
	I0408 18:57:24.461331  894348 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0408 18:57:24.470653  894348 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0408 18:57:24.470686  894348 status.go:422] ha-550237 apiserver status = Running (err=<nil>)
	I0408 18:57:24.470698  894348 status.go:257] ha-550237 status: &{Name:ha-550237 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:57:24.470717  894348 status.go:255] checking status of ha-550237-m02 ...
	I0408 18:57:24.471046  894348 cli_runner.go:164] Run: docker container inspect ha-550237-m02 --format={{.State.Status}}
	I0408 18:57:24.487047  894348 status.go:330] ha-550237-m02 host status = "Stopped" (err=<nil>)
	I0408 18:57:24.487071  894348 status.go:343] host is not running, skipping remaining checks
	I0408 18:57:24.487078  894348 status.go:257] ha-550237-m02 status: &{Name:ha-550237-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:57:24.487100  894348 status.go:255] checking status of ha-550237-m03 ...
	I0408 18:57:24.487415  894348 cli_runner.go:164] Run: docker container inspect ha-550237-m03 --format={{.State.Status}}
	I0408 18:57:24.502387  894348 status.go:330] ha-550237-m03 host status = "Running" (err=<nil>)
	I0408 18:57:24.502409  894348 host.go:66] Checking if "ha-550237-m03" exists ...
	I0408 18:57:24.502690  894348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550237-m03
	I0408 18:57:24.521867  894348 host.go:66] Checking if "ha-550237-m03" exists ...
	I0408 18:57:24.522335  894348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 18:57:24.522387  894348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550237-m03
	I0408 18:57:24.538876  894348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33595 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/ha-550237-m03/id_rsa Username:docker}
	I0408 18:57:24.636334  894348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:57:24.649544  894348 kubeconfig.go:125] found "ha-550237" server: "https://192.168.49.254:8443"
	I0408 18:57:24.649575  894348 api_server.go:166] Checking apiserver status ...
	I0408 18:57:24.649616  894348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 18:57:24.663008  894348 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1352/cgroup
	I0408 18:57:24.673648  894348 api_server.go:182] apiserver freezer: "8:freezer:/docker/3e776bf69edd18dbda67665b8efbf06ab9e7a4dfab339ccdaec3a4b9e034ca1c/kubepods/burstable/pode1303fa6b84bde1045a8c62cc17298a1/2eb0f69ae0a98f071ae794f418b0c2c0ae5cb499df9beb73675b34323e88bc72"
	I0408 18:57:24.673815  894348 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3e776bf69edd18dbda67665b8efbf06ab9e7a4dfab339ccdaec3a4b9e034ca1c/kubepods/burstable/pode1303fa6b84bde1045a8c62cc17298a1/2eb0f69ae0a98f071ae794f418b0c2c0ae5cb499df9beb73675b34323e88bc72/freezer.state
	I0408 18:57:24.683762  894348 api_server.go:204] freezer state: "THAWED"
	I0408 18:57:24.683791  894348 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0408 18:57:24.691844  894348 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0408 18:57:24.691876  894348 status.go:422] ha-550237-m03 apiserver status = Running (err=<nil>)
	I0408 18:57:24.691886  894348 status.go:257] ha-550237-m03 status: &{Name:ha-550237-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:57:24.691903  894348 status.go:255] checking status of ha-550237-m04 ...
	I0408 18:57:24.692193  894348 cli_runner.go:164] Run: docker container inspect ha-550237-m04 --format={{.State.Status}}
	I0408 18:57:24.708639  894348 status.go:330] ha-550237-m04 host status = "Running" (err=<nil>)
	I0408 18:57:24.708667  894348 host.go:66] Checking if "ha-550237-m04" exists ...
	I0408 18:57:24.708977  894348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550237-m04
	I0408 18:57:24.724041  894348 host.go:66] Checking if "ha-550237-m04" exists ...
	I0408 18:57:24.724351  894348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 18:57:24.724462  894348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550237-m04
	I0408 18:57:24.739937  894348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33600 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/ha-550237-m04/id_rsa Username:docker}
	I0408 18:57:24.835237  894348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:57:24.847785  894348 status.go:257] ha-550237-m04 status: &{Name:ha-550237-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.86s)
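
The verbose status output above shows how the status command decides an apiserver is healthy: pgrep the kube-apiserver process, read its freezer cgroup from /proc/<pid>/cgroup, confirm freezer.state is THAWED, then probe /healthz on the load-balancer endpoint. A condensed sketch of that sequence (commands mirror the log; this is not minikube's actual status code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func ssh(profile, cmd string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "ssh", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const profile = "ha-550237"
	pid, err := ssh(profile, "sudo pgrep -xnf kube-apiserver.*minikube.*")
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}
	// The freezer line of /proc/<pid>/cgroup names the cgroup to inspect.
	line, _ := ssh(profile, "sudo egrep ^[0-9]+:freezer: /proc/"+pid+"/cgroup")
	idx := strings.LastIndex(line, ":")
	if idx < 0 {
		fmt.Println("no freezer entry") // e.g. a cgroup v2 host
		return
	}
	state, _ := ssh(profile, "sudo cat /sys/fs/cgroup/freezer"+line[idx+1:]+"/freezer.state")
	fmt.Println("freezer state:", state) // expect THAWED
	// Same healthz probe as the log; the apiserver cert is self-signed here.
	tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
	resp, err := (&http.Client{Transport: tr}).Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}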

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

TestMultiControlPlane/serial/RestartSecondaryNode (19.03s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-550237 node start m02 -v=7 --alsologtostderr: (17.94437906s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.03s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (141.61s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-550237 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-550237 -v=7 --alsologtostderr
E0408 18:57:58.923929  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 18:57:58.929225  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 18:57:58.939530  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 18:57:58.959824  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 18:57:59.000179  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 18:57:59.080765  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 18:57:59.241102  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 18:57:59.561394  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 18:58:00.201925  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 18:58:01.482206  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 18:58:04.043150  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 18:58:09.163559  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 18:58:19.403840  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-550237 -v=7 --alsologtostderr: (37.405666145s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-550237 --wait=true -v=7 --alsologtostderr
E0408 18:58:39.884269  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 18:59:20.844473  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-550237 --wait=true -v=7 --alsologtostderr: (1m44.020614823s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-550237
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (141.61s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.29s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-550237 node delete m03 -v=7 --alsologtostderr: (10.33297s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.29s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

TestMultiControlPlane/serial/StopCluster (36.08s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 stop -v=7 --alsologtostderr
E0408 19:00:42.766154  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 19:00:50.214097  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-550237 stop -v=7 --alsologtostderr: (35.964575485s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-550237 status -v=7 --alsologtostderr: exit status 7 (116.765329ms)

-- stdout --
	ha-550237
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-550237-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-550237-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0408 19:00:54.864400  908021 out.go:291] Setting OutFile to fd 1 ...
	I0408 19:00:54.864797  908021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:00:54.864831  908021 out.go:304] Setting ErrFile to fd 2...
	I0408 19:00:54.864852  908021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:00:54.865143  908021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
	I0408 19:00:54.865391  908021 out.go:298] Setting JSON to false
	I0408 19:00:54.865452  908021 mustload.go:65] Loading cluster: ha-550237
	I0408 19:00:54.865534  908021 notify.go:220] Checking for updates...
	I0408 19:00:54.866823  908021 config.go:182] Loaded profile config "ha-550237": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 19:00:54.866960  908021 status.go:255] checking status of ha-550237 ...
	I0408 19:00:54.868051  908021 cli_runner.go:164] Run: docker container inspect ha-550237 --format={{.State.Status}}
	I0408 19:00:54.883634  908021 status.go:330] ha-550237 host status = "Stopped" (err=<nil>)
	I0408 19:00:54.883659  908021 status.go:343] host is not running, skipping remaining checks
	I0408 19:00:54.883667  908021 status.go:257] ha-550237 status: &{Name:ha-550237 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 19:00:54.883708  908021 status.go:255] checking status of ha-550237-m02 ...
	I0408 19:00:54.884017  908021 cli_runner.go:164] Run: docker container inspect ha-550237-m02 --format={{.State.Status}}
	I0408 19:00:54.899346  908021 status.go:330] ha-550237-m02 host status = "Stopped" (err=<nil>)
	I0408 19:00:54.899367  908021 status.go:343] host is not running, skipping remaining checks
	I0408 19:00:54.899376  908021 status.go:257] ha-550237-m02 status: &{Name:ha-550237-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 19:00:54.899399  908021 status.go:255] checking status of ha-550237-m04 ...
	I0408 19:00:54.899699  908021 cli_runner.go:164] Run: docker container inspect ha-550237-m04 --format={{.State.Status}}
	I0408 19:00:54.914384  908021 status.go:330] ha-550237-m04 host status = "Stopped" (err=<nil>)
	I0408 19:00:54.914407  908021 status.go:343] host is not running, skipping remaining checks
	I0408 19:00:54.914415  908021 status.go:257] ha-550237-m04 status: &{Name:ha-550237-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.08s)

TestMultiControlPlane/serial/RestartCluster (79.71s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-550237 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-550237 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.757102921s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (79.71s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

TestMultiControlPlane/serial/AddSecondaryNode (41.48s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-550237 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-550237 --control-plane -v=7 --alsologtostderr: (40.487657964s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-550237 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.48s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

TestJSONOutput/start/Command (59.49s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-744748 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0408 19:03:26.606412  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-744748 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (59.490434529s)
--- PASS: TestJSONOutput/start/Command (59.49s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-744748 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.72s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-744748 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.82s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-744748 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-744748 --output=json --user=testUser: (5.822289162s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-743179 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-743179 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.491623ms)

-- stdout --
	{"specversion":"1.0","id":"2290e888-a742-4e0b-b536-50b86bdbb26a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-743179] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"66bba428-4bc0-47d6-82c3-1fdb2bf4428f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18585"}}
	{"specversion":"1.0","id":"fa42fd1d-0e62-42ab-84f2-906c517ac4fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"758e57a4-cc4e-4e60-b2fe-7c585c8ad28f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig"}}
	{"specversion":"1.0","id":"2636c3cb-c250-4bb6-ace2-65fc243df5e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube"}}
	{"specversion":"1.0","id":"95abab5c-9f0f-44f1-9160-11140be37304","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b6bb98b8-9751-4edc-8b46-c3983b441ce0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f1eea665-2c66-4bca-ab80-3754c9359c8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-743179" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-743179
--- PASS: TestErrorJSONOutput (0.22s)
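
The events above are CloudEvents-framed JSON lines written to stdout, one per line. A minimal Go sketch (illustrative only, not the harness's own code; binary path and profile name are copied from this run) of how a caller might consume that stream and surface the terminal error event:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// cloudEvent models only the fields used here; real events carry more.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "json-output-error-743179", "--output=json", "--driver=fail")
	stdout, _ := cmd.StdoutPipe()
	_ = cmd.Start()
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev cloudEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip any non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// For this run: name=DRV_UNSUPPORTED_OS, exitcode=56.
			fmt.Printf("%s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
	_ = cmd.Wait() // exit status 56 is the expected outcome of this test
}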

TestKicCustomNetwork/create_custom_network (39.62s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-317071 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-317071 --network=: (37.507550648s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-317071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-317071
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-317071: (2.090638782s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.62s)
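
The assertion behind kic_custom_network_test.go:150 boils down to "a docker network named after the profile exists". A rough, self-contained equivalent (a sketch, not the test's actual helper; the network name is taken from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkExists reports whether `docker network ls` lists the given name.
func networkExists(name string) (bool, error) {
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		return false, err
	}
	for _, n := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if n == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := networkExists("docker-network-317071") // profile name from this run
	fmt.Println(ok, err)
}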

TestKicCustomNetwork/use_default_bridge_network (37.32s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-986839 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-986839 --network=bridge: (35.386264475s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-986839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-986839
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-986839: (1.920745877s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.32s)

TestKicExistingNetwork (34.2s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-169869 --network=existing-network
E0408 19:05:50.214042  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-169869 --network=existing-network: (32.079821561s)
helpers_test.go:175: Cleaning up "existing-network-169869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-169869
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-169869: (1.974580665s)
--- PASS: TestKicExistingNetwork (34.20s)

TestKicCustomSubnet (32.24s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-717582 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-717582 --subnet=192.168.60.0/24: (30.105944695s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-717582 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-717582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-717582
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-717582: (2.112465521s)
--- PASS: TestKicCustomSubnet (32.24s)
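
The inspect command above reads the subnet back out of the network's IPAM config. A minimal sketch of the implied assertion (values from this run; error handling kept deliberately thin):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read the subnet back from the docker network created for the profile.
	out, err := exec.Command("docker", "network", "inspect",
		"custom-subnet-717582", "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
		fmt.Printf("unexpected subnet: got %q\n", got)
	}
}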

TestKicStaticIP (31.62s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-808287 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-808287 --static-ip=192.168.200.200: (29.357739014s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-808287 ip
helpers_test.go:175: Cleaning up "static-ip-808287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-808287
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-808287: (2.117169605s)
--- PASS: TestKicStaticIP (31.62s)
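
The `ip` invocation above closes the loop on --static-ip: the printed address must match the one requested. A sketch of that comparison, assuming `minikube ip` prints only the address plus a trailing newline:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "static-ip-808287", "ip").Output()
	if err != nil {
		fmt.Println("ip failed:", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.200.200" {
		fmt.Printf("static IP not honored: got %q\n", got)
	}
}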

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (64.52s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-158959 --driver=docker  --container-runtime=containerd
E0408 19:07:13.264652  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-158959 --driver=docker  --container-runtime=containerd: (29.550116798s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-161700 --driver=docker  --container-runtime=containerd
E0408 19:07:58.926095  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-161700 --driver=docker  --container-runtime=containerd: (29.545558664s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-158959
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-161700
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-161700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-161700
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-161700: (1.961257541s)
helpers_test.go:175: Cleaning up "first-158959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-158959
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-158959: (2.229163486s)
--- PASS: TestMinikubeProfile (64.52s)
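
The repeated `profile list -ojson` calls above emit machine-readable profile data. A minimal consumer sketch; it assumes only the top-level "valid"/"invalid" arrays of objects with a Name field (the real schema carries full cluster configs):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	// Decode only the profile names; everything else is ignored here.
	var lists map[string][]struct {
		Name string `json:"Name"`
	}
	if err := json.Unmarshal(out, &lists); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range lists["valid"] {
		fmt.Println("valid profile:", p.Name)
	}
}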

TestMountStart/serial/StartWithMountFirst (5.85s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-430696 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-430696 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.851757751s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.85s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-430696 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
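
Every Verify* step in this group uses the same probe: the host directory is mounted (via 9p, per the --mount-msize/--mount-port flags above) at /minikube-host inside the node, and a zero exit from `ssh -- ls` is taken as proof the mount is live. A sketch of that probe:

package main

import (
	"fmt"
	"os/exec"
)

// mountVisible treats a zero exit from `minikube ssh -- ls /minikube-host`
// as evidence that the host directory is mounted inside the node.
func mountVisible(profile string) bool {
	return exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "--", "ls", "/minikube-host").Run() == nil
}

func main() {
	fmt.Println(mountVisible("mount-start-1-430696"))
}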

TestMountStart/serial/StartWithMountSecond (8.33s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-444423 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-444423 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.33305036s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.33s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-444423 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.58s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-430696 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-430696 --alsologtostderr -v=5: (1.583432335s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-444423 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-444423
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-444423: (1.215919112s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (7.43s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-444423
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-444423: (6.433478445s)
--- PASS: TestMountStart/serial/RestartStopped (7.43s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-444423 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (75.29s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-821774 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-821774 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m14.782628673s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.29s)

TestMultiNode/serial/DeployApp2Nodes (5.36s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821774 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821774 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-821774 -- rollout status deployment/busybox: (3.37144035s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821774 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821774 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821774 -- exec busybox-7fdf7869d9-r2f4p -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821774 -- exec busybox-7fdf7869d9-tfxgd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821774 -- exec busybox-7fdf7869d9-r2f4p -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821774 -- exec busybox-7fdf7869d9-tfxgd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821774 -- exec busybox-7fdf7869d9-r2f4p -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821774 -- exec busybox-7fdf7869d9-tfxgd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.36s)
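
The DNS assertions above reduce to running nslookup inside each busybox pod and requiring exit code 0. A sketch of that check (pod names are per-run; this one is copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

// podResolves runs nslookup inside a pod and reports whether it exited 0.
func podResolves(pod, host string) bool {
	return exec.Command("kubectl", "--context", "multinode-821774",
		"exec", pod, "--", "nslookup", host).Run() == nil
}

func main() {
	for _, h := range []string{
		"kubernetes.io",
		"kubernetes.default",
		"kubernetes.default.svc.cluster.local",
	} {
		fmt.Println(h, podResolves("busybox-7fdf7869d9-r2f4p", h))
	}
}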

TestMultiNode/serial/PingHostFrom2Pods (1.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821774 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821774 -- exec busybox-7fdf7869d9-r2f4p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821774 -- exec busybox-7fdf7869d9-r2f4p -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821774 -- exec busybox-7fdf7869d9-tfxgd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821774 -- exec busybox-7fdf7869d9-tfxgd -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.08s)

TestMultiNode/serial/AddNode (19.31s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-821774 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-821774 -v 3 --alsologtostderr: (18.649632206s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.31s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-821774 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (10.3s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 cp testdata/cp-test.txt multinode-821774:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 cp multinode-821774:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2661146245/001/cp-test_multinode-821774.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 cp multinode-821774:/home/docker/cp-test.txt multinode-821774-m02:/home/docker/cp-test_multinode-821774_multinode-821774-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774-m02 "sudo cat /home/docker/cp-test_multinode-821774_multinode-821774-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 cp multinode-821774:/home/docker/cp-test.txt multinode-821774-m03:/home/docker/cp-test_multinode-821774_multinode-821774-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774-m03 "sudo cat /home/docker/cp-test_multinode-821774_multinode-821774-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 cp testdata/cp-test.txt multinode-821774-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 cp multinode-821774-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2661146245/001/cp-test_multinode-821774-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 cp multinode-821774-m02:/home/docker/cp-test.txt multinode-821774:/home/docker/cp-test_multinode-821774-m02_multinode-821774.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774 "sudo cat /home/docker/cp-test_multinode-821774-m02_multinode-821774.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 cp multinode-821774-m02:/home/docker/cp-test.txt multinode-821774-m03:/home/docker/cp-test_multinode-821774-m02_multinode-821774-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774-m03 "sudo cat /home/docker/cp-test_multinode-821774-m02_multinode-821774-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 cp testdata/cp-test.txt multinode-821774-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 cp multinode-821774-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2661146245/001/cp-test_multinode-821774-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 cp multinode-821774-m03:/home/docker/cp-test.txt multinode-821774:/home/docker/cp-test_multinode-821774-m03_multinode-821774.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774 "sudo cat /home/docker/cp-test_multinode-821774-m03_multinode-821774.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 cp multinode-821774-m03:/home/docker/cp-test.txt multinode-821774-m02:/home/docker/cp-test_multinode-821774-m03_multinode-821774-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 ssh -n multinode-821774-m02 "sudo cat /home/docker/cp-test_multinode-821774-m03_multinode-821774-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.30s)
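
The copy matrix above is one pattern repeated across every node pair: `minikube cp` a file in, then `ssh cat` it back and compare bytes. A compressed sketch of a single round trip (errors ignored for brevity; the real helpers check every step):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, _ := os.ReadFile("testdata/cp-test.txt")
	_ = exec.Command("out/minikube-linux-arm64", "-p", "multinode-821774", "cp",
		"testdata/cp-test.txt", "multinode-821774:/home/docker/cp-test.txt").Run()
	got, _ := exec.Command("out/minikube-linux-arm64", "-p", "multinode-821774",
		"ssh", "-n", "multinode-821774", "sudo cat /home/docker/cp-test.txt").Output()
	fmt.Println("round-trip intact:",
		bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
}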

TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-821774 node stop m03: (1.230043011s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-821774 status: exit status 7 (511.441391ms)

-- stdout --
	multinode-821774
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-821774-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-821774-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-821774 status --alsologtostderr: exit status 7 (521.625764ms)

-- stdout --
	multinode-821774
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-821774-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-821774-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0408 19:10:36.789189  958013 out.go:291] Setting OutFile to fd 1 ...
	I0408 19:10:36.789303  958013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:10:36.789312  958013 out.go:304] Setting ErrFile to fd 2...
	I0408 19:10:36.789318  958013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:10:36.789572  958013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
	I0408 19:10:36.789751  958013 out.go:298] Setting JSON to false
	I0408 19:10:36.789780  958013 mustload.go:65] Loading cluster: multinode-821774
	I0408 19:10:36.789875  958013 notify.go:220] Checking for updates...
	I0408 19:10:36.790233  958013 config.go:182] Loaded profile config "multinode-821774": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 19:10:36.790246  958013 status.go:255] checking status of multinode-821774 ...
	I0408 19:10:36.790785  958013 cli_runner.go:164] Run: docker container inspect multinode-821774 --format={{.State.Status}}
	I0408 19:10:36.808667  958013 status.go:330] multinode-821774 host status = "Running" (err=<nil>)
	I0408 19:10:36.808696  958013 host.go:66] Checking if "multinode-821774" exists ...
	I0408 19:10:36.808992  958013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-821774
	I0408 19:10:36.825192  958013 host.go:66] Checking if "multinode-821774" exists ...
	I0408 19:10:36.825520  958013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 19:10:36.825569  958013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-821774
	I0408 19:10:36.853324  958013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33705 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/multinode-821774/id_rsa Username:docker}
	I0408 19:10:36.951432  958013 ssh_runner.go:195] Run: systemctl --version
	I0408 19:10:36.956709  958013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 19:10:36.969634  958013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 19:10:37.038083  958013 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-04-08 19:10:37.027877254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 19:10:37.038823  958013 kubeconfig.go:125] found "multinode-821774" server: "https://192.168.67.2:8443"
	I0408 19:10:37.038874  958013 api_server.go:166] Checking apiserver status ...
	I0408 19:10:37.038932  958013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:10:37.051536  958013 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup
	I0408 19:10:37.061524  958013 api_server.go:182] apiserver freezer: "8:freezer:/docker/5fb7a59598c4567d120e702bc7b975a8437d6bf36a0fc8c5779292d08535aa0a/kubepods/burstable/pod2bf83919ccdedf6b83df532e00658b7c/27c8c849a0886a33a13bbd2de360136e53331e6098baab74f976f9117c407b4f"
	I0408 19:10:37.061600  958013 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5fb7a59598c4567d120e702bc7b975a8437d6bf36a0fc8c5779292d08535aa0a/kubepods/burstable/pod2bf83919ccdedf6b83df532e00658b7c/27c8c849a0886a33a13bbd2de360136e53331e6098baab74f976f9117c407b4f/freezer.state
	I0408 19:10:37.070333  958013 api_server.go:204] freezer state: "THAWED"
	I0408 19:10:37.070361  958013 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0408 19:10:37.078363  958013 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0408 19:10:37.078419  958013 status.go:422] multinode-821774 apiserver status = Running (err=<nil>)
	I0408 19:10:37.078431  958013 status.go:257] multinode-821774 status: &{Name:multinode-821774 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 19:10:37.078459  958013 status.go:255] checking status of multinode-821774-m02 ...
	I0408 19:10:37.078844  958013 cli_runner.go:164] Run: docker container inspect multinode-821774-m02 --format={{.State.Status}}
	I0408 19:10:37.094042  958013 status.go:330] multinode-821774-m02 host status = "Running" (err=<nil>)
	I0408 19:10:37.094063  958013 host.go:66] Checking if "multinode-821774-m02" exists ...
	I0408 19:10:37.094361  958013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-821774-m02
	I0408 19:10:37.116665  958013 host.go:66] Checking if "multinode-821774-m02" exists ...
	I0408 19:10:37.116993  958013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 19:10:37.117040  958013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-821774-m02
	I0408 19:10:37.132237  958013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33710 SSHKeyPath:/home/jenkins/minikube-integration/18585-838483/.minikube/machines/multinode-821774-m02/id_rsa Username:docker}
	I0408 19:10:37.227236  958013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 19:10:37.240475  958013 status.go:257] multinode-821774-m02 status: &{Name:multinode-821774-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0408 19:10:37.240508  958013 status.go:255] checking status of multinode-821774-m03 ...
	I0408 19:10:37.240818  958013 cli_runner.go:164] Run: docker container inspect multinode-821774-m03 --format={{.State.Status}}
	I0408 19:10:37.255655  958013 status.go:330] multinode-821774-m03 host status = "Stopped" (err=<nil>)
	I0408 19:10:37.255680  958013 status.go:343] host is not running, skipping remaining checks
	I0408 19:10:37.255687  958013 status.go:257] multinode-821774-m03 status: &{Name:multinode-821774-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
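
Note the two deliberate non-zero exits above: with m03 stopped, `minikube status` still prints a full per-node report but exits with status 7, so a caller has to distinguish "valid report, degraded cluster" from "the status command itself failed". A sketch of that distinction (treating exit code 7 as observed in this run, not as documented API):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "multinode-821774", "status").Output()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		fmt.Println("status reported, but some component is stopped")
	default:
		fmt.Println("status itself failed:", err)
		return
	}
	fmt.Print(string(out)) // the per-node report shown above
}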

TestMultiNode/serial/StartAfterStop (9.18s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-821774 node start m03 -v=7 --alsologtostderr: (8.414242793s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.18s)

TestMultiNode/serial/RestartKeepsNodes (86.74s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-821774
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-821774
E0408 19:10:50.213581  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-821774: (24.870692499s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-821774 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-821774 --wait=true -v=8 --alsologtostderr: (1m1.72634838s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-821774
--- PASS: TestMultiNode/serial/RestartKeepsNodes (86.74s)

TestMultiNode/serial/DeleteNode (5.44s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-821774 node delete m03: (4.707093095s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.44s)

TestMultiNode/serial/StopMultiNode (23.97s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-821774 stop: (23.792375173s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-821774 status: exit status 7 (84.181245ms)

-- stdout --
	multinode-821774
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-821774-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-821774 status --alsologtostderr: exit status 7 (91.033929ms)

-- stdout --
	multinode-821774
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-821774-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0408 19:12:42.553920  965701 out.go:291] Setting OutFile to fd 1 ...
	I0408 19:12:42.554186  965701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:12:42.554219  965701 out.go:304] Setting ErrFile to fd 2...
	I0408 19:12:42.554238  965701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:12:42.554623  965701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
	I0408 19:12:42.554885  965701 out.go:298] Setting JSON to false
	I0408 19:12:42.554927  965701 mustload.go:65] Loading cluster: multinode-821774
	I0408 19:12:42.555641  965701 config.go:182] Loaded profile config "multinode-821774": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 19:12:42.555677  965701 status.go:255] checking status of multinode-821774 ...
	I0408 19:12:42.556430  965701 cli_runner.go:164] Run: docker container inspect multinode-821774 --format={{.State.Status}}
	I0408 19:12:42.557443  965701 notify.go:220] Checking for updates...
	I0408 19:12:42.573568  965701 status.go:330] multinode-821774 host status = "Stopped" (err=<nil>)
	I0408 19:12:42.573592  965701 status.go:343] host is not running, skipping remaining checks
	I0408 19:12:42.573600  965701 status.go:257] multinode-821774 status: &{Name:multinode-821774 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 19:12:42.573626  965701 status.go:255] checking status of multinode-821774-m02 ...
	I0408 19:12:42.573946  965701 cli_runner.go:164] Run: docker container inspect multinode-821774-m02 --format={{.State.Status}}
	I0408 19:12:42.589626  965701 status.go:330] multinode-821774-m02 host status = "Stopped" (err=<nil>)
	I0408 19:12:42.589664  965701 status.go:343] host is not running, skipping remaining checks
	I0408 19:12:42.589673  965701 status.go:257] multinode-821774-m02 status: &{Name:multinode-821774-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.97s)

TestMultiNode/serial/RestartMultiNode (49.15s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-821774 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0408 19:12:58.925499  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-821774 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (48.476313026s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821774 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.15s)

TestMultiNode/serial/ValidateNameConflict (31.25s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-821774
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-821774-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-821774-m02 --driver=docker  --container-runtime=containerd: exit status 14 (96.007579ms)

-- stdout --
	* [multinode-821774-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-821774-m02' is duplicated with machine name 'multinode-821774-m02' in profile 'multinode-821774'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-821774-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-821774-m03 --driver=docker  --container-runtime=containerd: (28.820116309s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-821774
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-821774: exit status 80 (306.438254ms)

-- stdout --
	* Adding node m03 to cluster multinode-821774 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-821774-m03 already exists in multinode-821774-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-821774-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-821774-m03: (1.954883447s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.25s)
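
The first negative case above shows that starting a profile whose name collides with an existing machine name fails fast with exit status 14 (MK_USAGE, per the stderr above) before any container is created. A sketch of provoking and detecting that:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "multinode-821774-m02",
		"--driver=docker", "--container-runtime=containerd").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit code:", ee.ExitCode()) // 14 (MK_USAGE) per the log
	}
}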

TestPreload (114.32s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-752141 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0408 19:14:21.966851  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-752141 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m14.091293896s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-752141 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-752141 image pull gcr.io/k8s-minikube/busybox: (1.27510964s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-752141
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-752141: (12.050811996s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-752141 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0408 19:15:50.214060  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-752141 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (23.990703535s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-752141 image list
helpers_test.go:175: Cleaning up "test-preload-752141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-752141
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-752141: (2.602432984s)
--- PASS: TestPreload (114.32s)
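
In outline, the test pulls an image into the runtime while the preload tarball is disabled, stops and restarts the cluster, then requires the image to still be listed. A compressed sketch of that flow (error handling elided; profile name from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run shells out to the minikube binary under test, ignoring errors for brevity.
func run(args ...string) string {
	out, _ := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	return string(out)
}

func main() {
	run("-p", "test-preload-752141", "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", "test-preload-752141")
	run("start", "-p", "test-preload-752141", "--wait=true")
	fmt.Println("busybox survived restart:",
		strings.Contains(run("-p", "test-preload-752141", "image", "list"),
			"gcr.io/k8s-minikube/busybox"))
}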

TestInsufficientStorage (12.99s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-273370 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-273370 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.390238508s)

-- stdout --
	{"specversion":"1.0","id":"029413bf-c558-46f6-aec2-7c2e3a2ac7e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-273370] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"797e9da3-fa39-4747-a379-8d313aa1281f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18585"}}
	{"specversion":"1.0","id":"7d21411d-0bcb-43b0-b035-aa674a8ffcd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0279e16b-b6e2-42d0-96ba-a75132c718f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig"}}
	{"specversion":"1.0","id":"5b50e83e-802d-4530-8daf-de315cb04c3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube"}}
	{"specversion":"1.0","id":"654b62f1-2a47-44f6-8921-63656243fa45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ce7cac6e-9748-4134-adb3-dd989dd7c01b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b176771d-f411-4874-ae28-8a17bdca200f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f7a8ce96-4ec1-46e9-8a85-77a3d8cd2fd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8ea473f0-d5c9-49e8-b207-0808c1451f08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a8e34f3-bcb7-4655-a5fa-98c2b0c6a791","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"fc65b8ba-507b-4f2c-9c28-fdf46aa53808","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-273370\" primary control-plane node in \"insufficient-storage-273370\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"534d59d3-9ef4-4e5d-b6f0-7e64695de5d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1712593525-18585 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"db0fcf30-db15-42de-a778-ef4066117f39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7179ef98-b913-48d3-ad9b-c17e95830e3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-273370 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-273370 --output=json --layout=cluster: exit status 7 (299.455871ms)

-- stdout --
	{"Name":"insufficient-storage-273370","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-273370","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0408 19:16:49.763296  982605 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-273370" does not appear in /home/jenkins/minikube-integration/18585-838483/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-273370 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-273370 --output=json --layout=cluster: exit status 7 (305.03071ms)

-- stdout --
	{"Name":"insufficient-storage-273370","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-273370","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0408 19:16:50.067413  982661 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-273370" does not appear in /home/jenkins/minikube-integration/18585-838483/kubeconfig
	E0408 19:16:50.077557  982661 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/insufficient-storage-273370/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-273370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-273370
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-273370: (1.99842598s)
--- PASS: TestInsufficientStorage (12.99s)
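
For reference, the `--output=json --layout=cluster` payload above decodes with a handful of structs. The field names (Name, StatusCode, StatusName, Components, Nodes) are taken verbatim from the output; the struct and variable names below are only illustrative, and the sample is a trimmed copy of the status call in the log.

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	// component mirrors entries such as apiserver/kubelet/kubeconfig above.
	type component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type node struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
	}

	type clusterStatus struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
		Nodes      []node               `json:"Nodes"`
	}

	func main() {
		// Trimmed sample from the log above.
		raw := `{"Name":"insufficient-storage-273370","StatusCode":507,"StatusName":"InsufficientStorage","Nodes":[{"Name":"insufficient-storage-273370","StatusCode":507,"StatusName":"InsufficientStorage"}]}`
		var st clusterStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			log.Fatal(err)
		}
		// 507 mirrors HTTP "Insufficient Storage"; the test treats it as the expected state.
		if st.StatusCode == 507 {
			fmt.Printf("cluster %s reports insufficient storage\n", st.Name)
		}
	}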

                                                
                                    
TestRunningBinaryUpgrade (86.4s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2265173403 start -p running-upgrade-928809 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2265173403 start -p running-upgrade-928809 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.944505563s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-928809 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0408 19:22:58.924529  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-928809 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.358069884s)
helpers_test.go:175: Cleaning up "running-upgrade-928809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-928809
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-928809: (2.263788951s)
--- PASS: TestRunningBinaryUpgrade (86.40s)

                                                
                                    
TestKubernetesUpgrade (379.42s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-758643 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-758643 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (57.19941796s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-758643
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-758643: (1.235582902s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-758643 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-758643 status --format={{.Host}}: exit status 7 (79.168042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-758643 --memory=2200 --kubernetes-version=v1.30.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-758643 --memory=2200 --kubernetes-version=v1.30.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5m3.54158157s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-758643 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-758643 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-758643 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (90.210819ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-758643] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-758643
	    minikube start -p kubernetes-upgrade-758643 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7586432 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-758643 --kubernetes-version=v1.30.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-758643 --memory=2200 --kubernetes-version=v1.30.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-758643 --memory=2200 --kubernetes-version=v1.30.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (14.530135255s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-758643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-758643
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-758643: (2.579283095s)
--- PASS: TestKubernetesUpgrade (379.42s)
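
The exit-status-106 refusal above comes from a version guard: an existing cluster may be upgraded but never downgraded. The following is not minikube's actual implementation, only a minimal sketch of that rule using golang.org/x/mod/semver; the two version strings are the ones from the run above, and validateVersionChange is a hypothetical helper.

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// validateVersionChange rejects any request that would move an existing
	// cluster to an older Kubernetes version (hypothetical helper).
	func validateVersionChange(existing, requested string) error {
		if semver.Compare(requested, existing) < 0 {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
		}
		return nil
	}

	func main() {
		// Versions taken from the test run above.
		if err := validateVersionChange("v1.30.0-rc.1", "v1.20.0"); err != nil {
			fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
		}
	}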

                                                
                                    
TestMissingContainerUpgrade (173.82s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3720935146 start -p missing-upgrade-369449 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3720935146 start -p missing-upgrade-369449 --memory=2200 --driver=docker  --container-runtime=containerd: (1m27.766489054s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-369449
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-369449: (10.328855308s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-369449
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-369449 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-369449 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m10.791215113s)
helpers_test.go:175: Cleaning up "missing-upgrade-369449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-369449
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-369449: (2.471317578s)
--- PASS: TestMissingContainerUpgrade (173.82s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-694093 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-694093 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (94.254182ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-694093] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
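
The exit-status-14 usage error above reflects a simple mutual-exclusion check between --no-kubernetes and --kubernetes-version. A minimal sketch with the standard flag package, assuming nothing about minikube's internals:

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()

		if *noK8s && *k8sVersion != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14) // exit status 14 is what the test asserts
		}
	}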

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (41.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-694093 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-694093 --driver=docker  --container-runtime=containerd: (41.027077914s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-694093 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.52s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-694093 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-694093 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.939550521s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-694093 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-694093 status -o json: exit status 2 (302.934418ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-694093","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-694093
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-694093: (1.84898976s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.09s)
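
The plain `status -o json` output above uses a flatter schema than the --layout=cluster form earlier. A sketch of decoding it; the field names come straight from the output, while the struct name is illustrative:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	type profileStatus struct {
		Name       string `json:"Name"`
		Host       string `json:"Host"`
		Kubelet    string `json:"Kubelet"`
		APIServer  string `json:"APIServer"`
		Kubeconfig string `json:"Kubeconfig"`
		Worker     bool   `json:"Worker"`
	}

	func main() {
		// Verbatim sample from the log above.
		raw := `{"Name":"NoKubernetes-694093","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var st profileStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			log.Fatal(err)
		}
		// Host running with kubelet/apiserver stopped is the expected
		// --no-kubernetes state, consistent with the non-zero exit recorded above.
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
	}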

                                                
                                    
TestNoKubernetes/serial/Start (6.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-694093 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-694093 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.126111272s)
--- PASS: TestNoKubernetes/serial/Start (6.13s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-694093 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-694093 "sudo systemctl is-active --quiet service kubelet": exit status 1 (353.143169ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
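
The non-zero exit above is the success path: `systemctl is-active` exits non-zero for a unit that is not active, so a stopped kubelet yields the status 3 seen in the log. A hedged local equivalent of the probe (run where systemd is present; command arguments copied from the test):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same probe the test runs over SSH.
		cmd := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Non-zero (status 3 in the log) means kubelet is not active --
			// exactly what a --no-kubernetes profile should report.
			fmt.Printf("kubelet not active (exit %d)\n", exitErr.ExitCode())
			return
		}
		if err == nil {
			fmt.Println("kubelet is active")
		}
	}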

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.05s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-694093
E0408 19:17:58.924250  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-694093: (1.235728705s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-694093 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-694093 --driver=docker  --container-runtime=containerd: (7.827836529s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.83s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-694093 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-694093 "sudo systemctl is-active --quiet service kubelet": exit status 1 (367.864976ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.16s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (113.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3671880954 start -p stopped-upgrade-355471 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3671880954 start -p stopped-upgrade-355471 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.885595037s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3671880954 -p stopped-upgrade-355471 stop
E0408 19:20:50.214152  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3671880954 -p stopped-upgrade-355471 stop: (19.914611335s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-355471 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-355471 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (46.321949496s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (113.12s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-355471
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-355471: (1.373377692s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

                                                
                                    
TestPause/serial/Start (91.65s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-248361 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0408 19:23:53.266528  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-248361 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m31.651611934s)
--- PASS: TestPause/serial/Start (91.65s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.2s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-248361 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-248361 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.175791494s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.20s)

                                                
                                    
TestPause/serial/Pause (1.01s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-248361 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-248361 --alsologtostderr -v=5: (1.006088865s)
--- PASS: TestPause/serial/Pause (1.01s)

                                                
                                    
TestPause/serial/VerifyStatus (0.42s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-248361 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-248361 --output=json --layout=cluster: exit status 2 (416.107946ms)

                                                
                                                
-- stdout --
	{"Name":"pause-248361","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-248361","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)

                                                
                                    
TestPause/serial/Unpause (0.87s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-248361 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.87s)

                                                
                                    
TestPause/serial/PauseAgain (1.02s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-248361 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-248361 --alsologtostderr -v=5: (1.01853128s)
--- PASS: TestPause/serial/PauseAgain (1.02s)

                                                
                                    
TestPause/serial/DeletePaused (2.77s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-248361 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-248361 --alsologtostderr -v=5: (2.774378924s)
--- PASS: TestPause/serial/DeletePaused (2.77s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-248361
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-248361: exit status 1 (15.593481ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-248361: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.38s)
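
The volume check above relies on `docker volume inspect` failing once the profile has been deleted; the "no such volume" error is the desired result. A small sketch of the same assertion, with the profile name taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "volume", "inspect", "pause-248361").CombinedOutput()
		if err != nil {
			// Non-zero exit (status 1 in the log) means the volume is gone.
			fmt.Printf("volume removed as expected: %s\n", strings.TrimSpace(string(out)))
			return
		}
		fmt.Println("volume still exists; cleanup failed")
	}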

                                                
                                    
TestNetworkPlugins/group/false (5.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-637059 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-637059 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (311.722358ms)

                                                
                                                
-- stdout --
	* [false-637059] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 19:25:02.503402 1023264 out.go:291] Setting OutFile to fd 1 ...
	I0408 19:25:02.503983 1023264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:25:02.504010 1023264 out.go:304] Setting ErrFile to fd 2...
	I0408 19:25:02.504029 1023264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:25:02.504297 1023264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-838483/.minikube/bin
	I0408 19:25:02.504745 1023264 out.go:298] Setting JSON to false
	I0408 19:25:02.505734 1023264 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":14847,"bootTime":1712589456,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0408 19:25:02.505829 1023264 start.go:139] virtualization:  
	I0408 19:25:02.510130 1023264 out.go:177] * [false-637059] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0408 19:25:02.512367 1023264 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 19:25:02.512424 1023264 notify.go:220] Checking for updates...
	I0408 19:25:02.514787 1023264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 19:25:02.516437 1023264 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18585-838483/kubeconfig
	I0408 19:25:02.518793 1023264 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-838483/.minikube
	I0408 19:25:02.520547 1023264 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0408 19:25:02.522405 1023264 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 19:25:02.524936 1023264 config.go:182] Loaded profile config "force-systemd-flag-471739": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 19:25:02.525037 1023264 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 19:25:02.559422 1023264 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0408 19:25:02.559533 1023264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0408 19:25:02.686794 1023264 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-08 19:25:02.67109884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0408 19:25:02.686987 1023264 docker.go:295] overlay module found
	I0408 19:25:02.690713 1023264 out.go:177] * Using the docker driver based on user configuration
	I0408 19:25:02.692691 1023264 start.go:297] selected driver: docker
	I0408 19:25:02.692757 1023264 start.go:901] validating driver "docker" against <nil>
	I0408 19:25:02.692786 1023264 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 19:25:02.696484 1023264 out.go:177] 
	W0408 19:25:02.698845 1023264 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0408 19:25:02.701330 1023264 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-637059 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-637059

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-637059

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-637059

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-637059

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-637059

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-637059

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-637059

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-637059

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-637059

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-637059

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-637059

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-637059" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-637059" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-637059" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-637059" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-637059" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-637059" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-637059" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-637059" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-637059" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-637059" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-637059" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-637059

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637059"

                                                
                                                
----------------------- debugLogs end: false-637059 [took: 4.695061356s] --------------------------------
helpers_test.go:175: Cleaning up "false-637059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-637059
--- PASS: TestNetworkPlugins/group/false (5.21s)
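
The immediate exit status 14 above comes from driver validation: with the containerd runtime, --cni=false is rejected before any container is created, which is why every debugLogs probe afterwards finds no profile. This is not minikube's code, just a hypothetical sketch of that check:

	package main

	import (
		"fmt"
		"os"
	)

	// validateCNI is a hypothetical stand-in for the MK_USAGE check in the log:
	// the containerd runtime cannot run without a CNI plugin.
	func validateCNI(containerRuntime, cni string) error {
		if containerRuntime == "containerd" && cni == "false" {
			return fmt.Errorf("The %q container runtime requires CNI", containerRuntime)
		}
		return nil
	}

	func main() {
		// Values taken from the test invocation above.
		if err := validateCNI("containerd", "false"); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
			os.Exit(14) // matches the exit status recorded above
		}
	}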

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (173.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-540675 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0408 19:27:58.923867  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-540675 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m53.765724967s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (173.77s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-537054 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-537054 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m1.266007356s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-540675 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3bb7e417-1255-40b4-97f0-063503e06f6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3bb7e417-1255-40b4-97f0-063503e06f6f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003472925s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-540675 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.60s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-540675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-540675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.142347957s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-540675 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-540675 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-540675 --alsologtostderr -v=3: (12.386204272s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-540675 -n old-k8s-version-540675
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-540675 -n old-k8s-version-540675: exit status 7 (96.363323ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-540675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
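
minikube status exits non-zero whenever the cluster is not fully running, which is why the test accepts exit status 7 here: it only needs the host to report Stopped. Addons enabled against a stopped profile are recorded in the profile's config and take effect on the next start (the later SecondStart/AddonExistsAfterStop steps rely on this). Sketch:

	# Expect "Stopped" on stdout and a non-zero (7) exit while the host is down.
	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-540675 \
	  || echo "status exited $? (expected while stopped)"
	# Enable dashboard now; it is applied when the profile starts again.
	out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-540675 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4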

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-537054 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [103a728c-e1a9-4e91-8cd8-97ec9336b11b] Pending
helpers_test.go:344: "busybox" [103a728c-e1a9-4e91-8cd8-97ec9336b11b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [103a728c-e1a9-4e91-8cd8-97ec9336b11b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003426605s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-537054 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-537054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-537054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.103742119s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-537054 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-537054 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-537054 --alsologtostderr -v=3: (12.213784968s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-537054 -n default-k8s-diff-port-537054
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-537054 -n default-k8s-diff-port-537054: exit status 7 (122.56182ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-537054 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-537054 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
E0408 19:30:50.214052  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 19:31:01.967839  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 19:32:58.924406  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-537054 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m27.777094145s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-537054 -n default-k8s-diff-port-537054
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.15s)
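
SecondStart reissues the exact FirstStart command against the now-stopped profile; minikube detects the existing profile and restarts it, re-applying configuration such as the dashboard addon enabled above, rather than creating a new cluster. Sketch:

	# Same profile name and flags as FirstStart: this is a restart, not a rebuild.
	out/minikube-linux-arm64 start -p default-k8s-diff-port-537054 \
	  --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.29.3
	# Should print "Running" and exit 0 once the restart completes.
	out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-537054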

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kf4q7" [60ae5741-96ca-4966-bef5-650db1139a2f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004313146s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kf4q7" [60ae5741-96ca-4966-bef5-650db1139a2f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005140434s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-537054 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-537054 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)
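
VerifyKubernetesImages pulls the profile's image inventory as JSON and flags anything outside the expected minikube image set (here the busybox and kindnetd images deployed earlier). A sketch of inspecting the same data by hand; it assumes jq is installed and that the JSON is an array of objects with a repoTags field, neither of which the log itself guarantees:

	# Dump every image tag known to the profile's container runtime.
	out/minikube-linux-arm64 -p default-k8s-diff-port-537054 image list --format=json \
	  | jq -r '.[].repoTags[]?'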

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-537054 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-537054 -n default-k8s-diff-port-537054
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-537054 -n default-k8s-diff-port-537054: exit status 2 (327.55564ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-537054 -n default-k8s-diff-port-537054
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-537054 -n default-k8s-diff-port-537054: exit status 2 (322.752824ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-537054 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-537054 -n default-k8s-diff-port-537054
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-537054 -n default-k8s-diff-port-537054
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)
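
The two status probes above show what pause leaves behind: the apiserver reports Paused while the kubelet reports Stopped, and both probes exit with status 2, which the test tolerates. Sketch of the same cycle:

	out/minikube-linux-arm64 pause -p default-k8s-diff-port-537054 --alsologtostderr -v=1
	# Both checks exit 2 while paused; "|| true" keeps a script moving.
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-537054 || true
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-537054 || true
	# After unpause both probes return to exit 0.
	out/minikube-linux-arm64 unpause -p default-k8s-diff-port-537054 --alsologtostderr -v=1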

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (88.94s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-160920 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
E0408 19:35:50.213525  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-160920 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m28.941756056s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.94s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pvtss" [aed1226d-4528-4ad7-8764-e147df208346] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004397456s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pvtss" [aed1226d-4528-4ad7-8764-e147df208346] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003970511s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-540675 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-540675 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-540675 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-540675 -n old-k8s-version-540675
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-540675 -n old-k8s-version-540675: exit status 2 (320.753016ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-540675 -n old-k8s-version-540675
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-540675 -n old-k8s-version-540675: exit status 2 (381.027684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-540675 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-540675 -n old-k8s-version-540675
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-540675 -n old-k8s-version-540675
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (65.67s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-801717 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-801717 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1: (1m5.670954297s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.67s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.53s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-160920 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b4d41d8a-455c-4bb6-9447-2fbe760c11dc] Pending
helpers_test.go:344: "busybox" [b4d41d8a-455c-4bb6-9447-2fbe760c11dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b4d41d8a-455c-4bb6-9447-2fbe760c11dc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003495204s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-160920 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.6s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-160920 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-160920 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.443282944s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-160920 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.60s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.51s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-160920 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-160920 --alsologtostderr -v=3: (12.513429664s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-160920 -n embed-certs-160920
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-160920 -n embed-certs-160920: exit status 7 (82.552248ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-160920 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (290.84s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-160920 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-160920 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m50.486120463s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-160920 -n embed-certs-160920
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (290.84s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.45s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-801717 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9d5305de-3eda-4442-b282-0fa0f4b1edce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9d5305de-3eda-4442-b282-0fa0f4b1edce] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00373364s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-801717 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.45s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.4s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-801717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-801717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.239825827s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-801717 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.29s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-801717 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-801717 --alsologtostderr -v=3: (12.286466874s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-801717 -n no-preload-801717
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-801717 -n no-preload-801717: exit status 7 (77.045532ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-801717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (296.5s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-801717 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1
E0408 19:37:58.924168  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 19:39:22.549938  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
E0408 19:39:22.555298  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
E0408 19:39:22.565651  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
E0408 19:39:22.586036  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
E0408 19:39:22.626379  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
E0408 19:39:22.706702  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
E0408 19:39:22.866870  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
E0408 19:39:23.187431  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
E0408 19:39:23.828034  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
E0408 19:39:25.108281  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
E0408 19:39:27.668731  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
E0408 19:39:32.789301  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
E0408 19:39:43.030483  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
E0408 19:40:03.511346  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
E0408 19:40:05.981705  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
E0408 19:40:05.986920  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
E0408 19:40:05.997627  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
E0408 19:40:06.018679  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
E0408 19:40:06.058967  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
E0408 19:40:06.139206  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
E0408 19:40:06.299758  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
E0408 19:40:06.620310  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
E0408 19:40:07.261292  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
E0408 19:40:08.541941  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
E0408 19:40:11.102140  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
E0408 19:40:16.222828  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
E0408 19:40:26.463445  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
E0408 19:40:33.267119  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 19:40:44.472410  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
E0408 19:40:46.943647  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
E0408 19:40:50.213709  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 19:41:27.904250  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-801717 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1: (4m56.038349569s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-801717 -n no-preload-801717
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (296.50s)
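
The E0408 cert_rotation lines interleaved through this run are not failures of no-preload itself: they appear to come from client-go's certificate-reload watcher inside the shared test process, which keeps trying to re-read client.crt files for profiles (addons-038955, functional-435105, old-k8s-version-540675, default-k8s-diff-port-537054) whose directories earlier tests already deleted. Expected noise in a parallel run.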

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-p787s" [60d4f3c5-9207-4185-bd0b-9c64f253fa95] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004298198s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-p787s" [60d4f3c5-9207-4185-bd0b-9c64f253fa95] Running
E0408 19:42:06.392655  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003820321s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-160920 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-160920 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.6s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-160920 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-160920 --alsologtostderr -v=1: (1.032834045s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-160920 -n embed-certs-160920
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-160920 -n embed-certs-160920: exit status 2 (416.532656ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-160920 -n embed-certs-160920
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-160920 -n embed-certs-160920: exit status 2 (429.734118ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-160920 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-160920 -n embed-certs-160920
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-160920 -n embed-certs-160920
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.60s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.09s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-534723 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-534723 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1: (46.09487304s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.09s)
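
The newest-cni profile adds feature gates, a CNI network plugin, and kubeadm extra-config on top of the usual start flags, and relaxes --wait to only the apiserver, system pods, and default service account; that is also why the DeployApp and *ExistsAfterStop subtests below are 0.00s no-ops, as their "cni mode requires additional setup" warnings note. The invocation, verbatim apart from line breaks:

	out/minikube-linux-arm64 start -p newest-cni-534723 --memory=2200 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.30.0-rc.1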

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-m6dnd" [34202ccb-6e8d-498a-8931-55cb44764037] Running
E0408 19:42:49.825359  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003930797s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-m6dnd" [34202ccb-6e8d-498a-8931-55cb44764037] Running
E0408 19:42:58.924492  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004332462s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-801717 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-801717 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.46s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-801717 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-801717 --alsologtostderr -v=1: (1.126910531s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-801717 -n no-preload-801717
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-801717 -n no-preload-801717: exit status 2 (396.891171ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-801717 -n no-preload-801717
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-801717 -n no-preload-801717: exit status 2 (563.377122ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-801717 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-801717 --alsologtostderr -v=1: (1.222001988s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-801717 -n no-preload-801717
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-801717 -n no-preload-801717
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-534723 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-534723 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.424565914s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.45s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-534723 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-534723 --alsologtostderr -v=3: (1.446326911s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-534723 -n newest-cni-534723
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-534723 -n newest-cni-534723: exit status 7 (78.581953ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-534723 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (20.71s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-534723 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-534723 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1: (20.256142s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-534723 -n newest-cni-534723
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.71s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (93.05s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-637059 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-637059 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m33.053638791s)
--- PASS: TestNetworkPlugins/group/auto/Start (93.05s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-534723 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.87s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-534723 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-534723 --alsologtostderr -v=1: (1.010404077s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-534723 -n newest-cni-534723
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-534723 -n newest-cni-534723: exit status 2 (404.042103ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-534723 -n newest-cni-534723
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-534723 -n newest-cni-534723: exit status 2 (370.257862ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-534723 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-534723 -n newest-cni-534723
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-534723 -n newest-cni-534723
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.87s)
E0408 19:49:41.618066  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/auto-637059/client.crt: no such file or directory
E0408 19:49:41.623858  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/auto-637059/client.crt: no such file or directory
E0408 19:49:41.634098  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/auto-637059/client.crt: no such file or directory
E0408 19:49:41.654357  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/auto-637059/client.crt: no such file or directory
E0408 19:49:41.694639  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/auto-637059/client.crt: no such file or directory
E0408 19:49:41.775509  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/auto-637059/client.crt: no such file or directory
E0408 19:49:41.936049  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/auto-637059/client.crt: no such file or directory
E0408 19:49:42.256551  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/auto-637059/client.crt: no such file or directory
E0408 19:49:42.897092  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/auto-637059/client.crt: no such file or directory
E0408 19:49:44.177601  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/auto-637059/client.crt: no such file or directory
E0408 19:49:46.737982  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/auto-637059/client.crt: no such file or directory
E0408 19:49:51.858614  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/auto-637059/client.crt: no such file or directory
E0408 19:50:02.099137  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/auto-637059/client.crt: no such file or directory
E0408 19:50:05.981240  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/default-k8s-diff-port-537054/client.crt: no such file or directory
E0408 19:50:11.015740  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/kindnet-637059/client.crt: no such file or directory
E0408 19:50:11.021051  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/kindnet-637059/client.crt: no such file or directory
E0408 19:50:11.031387  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/kindnet-637059/client.crt: no such file or directory
E0408 19:50:11.051664  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/kindnet-637059/client.crt: no such file or directory
E0408 19:50:11.091918  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/kindnet-637059/client.crt: no such file or directory
E0408 19:50:11.172285  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/kindnet-637059/client.crt: no such file or directory
E0408 19:50:11.332652  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/kindnet-637059/client.crt: no such file or directory
E0408 19:50:11.652794  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/kindnet-637059/client.crt: no such file or directory
E0408 19:50:12.293716  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/kindnet-637059/client.crt: no such file or directory
E0408 19:50:13.574549  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/kindnet-637059/client.crt: no such file or directory
E0408 19:50:13.740725  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/no-preload-801717/client.crt: no such file or directory
E0408 19:50:16.134984  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/kindnet-637059/client.crt: no such file or directory
E0408 19:50:21.255659  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/kindnet-637059/client.crt: no such file or directory
E0408 19:50:22.579521  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/auto-637059/client.crt: no such file or directory
E0408 19:50:31.496799  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/kindnet-637059/client.crt: no such file or directory
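Two notes on the block above. The repeated E0408 `cert_rotation` lines appear to be background noise from the test binary's Kubernetes client trying to reload client certificates for profiles that earlier tests already deleted (the paths it cannot open are exactly those profiles' `client.crt` files); they interleave into whatever test is running and are not failures. Separately, the Pause test deliberately tolerates exit status 2 from `minikube status`: while a profile is paused the API server reports `Paused` and the kubelet `Stopped`, and the non-zero exit code is how `status` signals that degraded state. A minimal sketch of the same probe, using the profile name from this log; everything else is illustrative:

```go
// Sketch of the paused-state probe from the Pause test, not the suite's code.
// While the profile is paused, `minikube status` prints the expected state
// but exits with status 2, which the test treats as acceptable.
package main

import (
	"fmt"
	"os/exec"
)

func probe(format string) {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format="+format, "-p", "newest-cni-534723").CombinedOutput()
	fmt.Printf("%s -> %q (err: %v)\n", format, out, err)
}

func main() {
	probe("{{.APIServer}}") // expect "Paused", exit status 2
	probe("{{.Kubelet}}")   // expect "Stopped", exit status 2
}
```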

TestNetworkPlugins/group/kindnet/Start (95.62s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-637059 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0408 19:44:22.550174  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-637059 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m35.624403282s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (95.62s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-637059 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
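The KubeletFlags subtest for each CNI is the same one-liner: ssh into the node and run `pgrep -a kubelet`, which prints the kubelet's PID and full command line so the suite can assert on the flags it was started with. A hedged standalone equivalent (profile name taken from this log):

```go
// Sketch of the kubelet-flags probe: pgrep -a prints the PID plus the full
// command line, flags included, for the matching process on the node.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "ssh",
		"-p", "auto-637059", "pgrep -a kubelet").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh failed: %v\n%s", err, out)
	}
	fmt.Printf("kubelet command line: %s", out)
}
```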

TestNetworkPlugins/group/auto/NetCatPod (8.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-637059 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pgjlk" [abb31143-635e-40f8-86da-8eed65537ae6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pgjlk" [abb31143-635e-40f8-86da-8eed65537ae6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004603069s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.30s)
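Each NetCatPod subtest follows the same pattern: force-replace the netcat deployment from testdata, then poll until a pod labeled `app=netcat` is Running and Ready. A rough standalone equivalent of that wait, using `kubectl wait` rather than the suite's helpers_test.go poller:

```go
// Hedged sketch of the app=netcat readiness wait, not the suite's helper.
package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "auto-637059",
		"wait", "--for=condition=ready", "pod",
		"--selector=app=netcat", "--timeout=15m").CombinedOutput()
	if err != nil {
		log.Fatalf("netcat pod never became ready: %v\n%s", err, out)
	}
	log.Printf("app=netcat healthy:\n%s", out)
}
```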

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-637059 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-637059 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-637059 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
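The three short subtests above (and their counterparts for every CNI below) share one mechanism: `kubectl exec` into the netcat deployment and run a quick probe. DNS resolves the in-cluster name `kubernetes.default`; Localhost checks the pod can reach its own port directly; HairPin checks the pod can reach itself back through its own Service (`netcat`), which exercises hairpin NAT. The `nc` flags mean: `-z` connect-only scan (no data sent), `-w 5` five-second timeout, `-i 5` five seconds between attempts. A combined sketch, illustrative only:

```go
// Hedged sketch of the per-CNI connectivity probes, not the suite's code.
// Each probe execs a command inside the netcat deployment's pod.
package main

import (
	"log"
	"os/exec"
)

func probe(args ...string) {
	base := []string{"--context", "auto-637059", "exec", "deployment/netcat", "--"}
	out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("probe %v failed: %v\n%s", args, err, out)
	}
}

func main() {
	probe("nslookup", "kubernetes.default")                  // DNS subtest
	probe("/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080") // Localhost subtest
	probe("/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")    // HairPin subtest (via the Service)
	log.Println("DNS, Localhost and HairPin probes all passed")
}
```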

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6ktf8" [91249c38-16ad-4173-90de-0bf2d7a14335] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004731777s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
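ControllerPod gates the connectivity subtests on the CNI's own daemon pod being up (label `app=kindnet` in kube-system here; `k8s-app=calico-node` and `app=flannel` in kube-flannel play the same role below). A rough `kubectl wait` equivalent of that gate; condition=ready is used here as a stand-in for the suite's Running check:

```go
// Hedged sketch of the CNI controller-pod gate, not the suite's poller.
package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "kindnet-637059",
		"wait", "--namespace=kube-system", "--for=condition=ready",
		"pod", "--selector=app=kindnet", "--timeout=10m").CombinedOutput()
	if err != nil {
		log.Fatalf("kindnet pod not ready: %v\n%s", err, out)
	}
	log.Println("app=kindnet healthy")
}
```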

TestNetworkPlugins/group/calico/Start (81.1s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-637059 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-637059 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m21.100440387s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.10s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-637059 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-637059 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-slr8w" [67444c21-b76b-42d9-8190-4cf90b9a9946] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-slr8w" [67444c21-b76b-42d9-8190-4cf90b9a9946] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004509338s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.37s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-637059 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-637059 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-637059 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/Start (62.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-637059 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-637059 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m2.10547032s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.11s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-pj88s" [2ad13ee8-590a-4faf-b51c-9e2f34576621] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006878036s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-637059 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

TestNetworkPlugins/group/calico/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-637059 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-89c6v" [d9270e37-8fa2-48a2-8749-bb1c70d21624] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-89c6v" [d9270e37-8fa2-48a2-8749-bb1c70d21624] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004363512s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.33s)

TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-637059 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-637059 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-637059 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-637059 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-637059 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-m6x9p" [a3d1905d-9538-47e0-a5e2-eeaf25283baf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-m6x9p" [a3d1905d-9538-47e0-a5e2-eeaf25283baf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004534349s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.33s)

TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-637059 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-637059 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-637059 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/enable-default-cni/Start (93.94s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-637059 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0408 19:47:29.896650  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/no-preload-801717/client.crt: no such file or directory
E0408 19:47:29.902470  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/no-preload-801717/client.crt: no such file or directory
E0408 19:47:29.913223  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/no-preload-801717/client.crt: no such file or directory
E0408 19:47:29.933469  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/no-preload-801717/client.crt: no such file or directory
E0408 19:47:29.973667  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/no-preload-801717/client.crt: no such file or directory
E0408 19:47:30.053890  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/no-preload-801717/client.crt: no such file or directory
E0408 19:47:30.214314  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/no-preload-801717/client.crt: no such file or directory
E0408 19:47:30.534735  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/no-preload-801717/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-637059 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m33.935320333s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (93.94s)

TestNetworkPlugins/group/flannel/Start (64.14s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-637059 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0408 19:47:35.017871  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/no-preload-801717/client.crt: no such file or directory
E0408 19:47:40.138924  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/no-preload-801717/client.crt: no such file or directory
E0408 19:47:41.968558  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 19:47:50.379711  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/no-preload-801717/client.crt: no such file or directory
E0408 19:47:58.924013  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/functional-435105/client.crt: no such file or directory
E0408 19:48:10.860063  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/no-preload-801717/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-637059 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m4.14380913s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.14s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-28xpv" [a4656cbe-6bf3-4066-ac43-d51a0fe3850b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00462211s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-637059 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-637059 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f627w" [ae4ad40d-1859-45c2-bf6f-e17f8d5d0aa5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-f627w" [ae4ad40d-1859-45c2-bf6f-e17f8d5d0aa5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004080314s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-637059 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-637059 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2kthn" [a5e918ab-8d04-48b7-a1e0-9ff926c58029] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0408 19:48:51.820507  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/no-preload-801717/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-2kthn" [a5e918ab-8d04-48b7-a1e0-9ff926c58029] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003593113s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-637059 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-637059 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-637059 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-637059 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-637059 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-637059 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (85.31s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-637059 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0408 19:49:22.550451  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/old-k8s-version-540675/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-637059 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m25.305898542s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.31s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-637059 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-637059 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v56rc" [a2b6f00a-5a07-42c7-a92f-0170149cddb8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v56rc" [a2b6f00a-5a07-42c7-a92f-0170149cddb8] Running
E0408 19:50:50.213982  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/addons-038955/client.crt: no such file or directory
E0408 19:50:51.977565  843900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/kindnet-637059/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003915713s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-637059 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-637059 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-637059 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (31/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.30.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.30.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.30.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.1/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-887640 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-887640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-887640
--- SKIP: TestDownloadOnlyKic (0.56s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-228663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-228663
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (6.57s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-637059 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-637059

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-637059

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-637059

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-637059

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-637059

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-637059

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-637059

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-637059

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-637059

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-637059

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: /etc/hosts:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: /etc/resolv.conf:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-637059

>>> host: crictl pods:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: crictl containers:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> k8s: describe netcat deployment:
error: context "kubenet-637059" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-637059" does not exist

>>> k8s: netcat logs:
error: context "kubenet-637059" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-637059" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-637059" does not exist

>>> k8s: coredns logs:
error: context "kubenet-637059" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-637059" does not exist

>>> k8s: api server logs:
error: context "kubenet-637059" does not exist

>>> host: /etc/cni:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: ip a s:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: ip r s:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-637059" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-637059" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-637059" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
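
The empty kubeconfig above explains every failure in this dump: with clusters, contexts, and users all null, any kubectl invocation pinned to the kubenet-637059 context fails during client configuration, before any cluster I/O happens. A minimal sketch that reproduces the lookup failure (the program is illustrative and not part of the test suite; it assumes kubectl is on PATH and KUBECONFIG points at an empty config like the one shown above):

	// repro.go: hypothetical standalone reproducer, not minikube code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Pinning an undefined context makes kubectl fail fast with:
		// "Error in configuration: context was not found for specified context: kubenet-637059"
		out, err := exec.Command("kubectl", "--context", "kubenet-637059", "get", "nodes").CombinedOutput()
		if err != nil {
			fmt.Printf("%s", out)
			fmt.Println("exit:", err)
		}
	}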

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-637059

>>> host: docker daemon status:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: docker daemon config:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: docker system info:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: cri-docker daemon status:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: cri-docker daemon config:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: cri-dockerd version:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: containerd daemon status:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: containerd daemon config:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: containerd config dump:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: crio daemon status:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: crio daemon config:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: /etc/crio:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

>>> host: crio config:
* Profile "kubenet-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637059"

----------------------- debugLogs end: kubenet-637059 [took: 6.320016785s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-637059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-637059
--- SKIP: TestNetworkPlugins/group/kubenet (6.57s)

TestNetworkPlugins/group/cilium (5.56s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
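The skip above is an ordinary testing skip: t.Skip marks the subtest as skipped without failing the run, and deferred debug/cleanup code still executes, which is why the debugLogs dump and profile deletion below still appear. A hedged sketch of such a guard (not the verbatim net_test.go source; the helper name is illustrative):

	package net_test

	import "testing"

	// skipOutdatedPlugin loosely mirrors the guard that produced the SKIP above.
	func skipOutdatedPlugin(t *testing.T, name string) {
		if name == "cilium" {
			t.Skip("Skipping the test as it's interfering with other tests and is outdated")
		}
	}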
panic.go:626: 
----------------------- debugLogs start: cilium-637059 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-637059

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-637059

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-637059

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-637059

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-637059

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-637059

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-637059

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-637059

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-637059

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-637059

>>> host: /etc/nsswitch.conf:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: /etc/hosts:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: /etc/resolv.conf:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-637059

>>> host: crictl pods:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: crictl containers:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> k8s: describe netcat deployment:
error: context "cilium-637059" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-637059" does not exist

>>> k8s: netcat logs:
error: context "cilium-637059" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-637059" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-637059" does not exist

>>> k8s: coredns logs:
error: context "cilium-637059" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-637059" does not exist

>>> k8s: api server logs:
error: context "cilium-637059" does not exist

>>> host: /etc/cni:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: ip a s:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: ip r s:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: iptables-save:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: iptables table nat:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-637059

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-637059

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-637059" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-637059" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-637059

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-637059

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-637059" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-637059" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-637059" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-637059" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-637059" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: kubelet daemon config:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> k8s: kubelet logs:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18585-838483/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Apr 2024 19:25:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-471739
contexts:
- context:
    cluster: force-systemd-flag-471739
    extensions:
    - extension:
        last-update: Mon, 08 Apr 2024 19:25:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: force-systemd-flag-471739
  name: force-systemd-flag-471739
current-context: force-systemd-flag-471739
kind: Config
preferences: {}
users:
- name: force-systemd-flag-471739
  user:
    client-certificate: /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/force-systemd-flag-471739/client.crt
    client-key: /home/jenkins/minikube-integration/18585-838483/.minikube/profiles/force-systemd-flag-471739/client.key
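
Unlike the kubenet dump, this kubeconfig is not empty: it defines exactly one cluster/context/user triple, for the concurrently running force-systemd-flag-471739 profile, and no entry for cilium-637059, so every context lookup above and below fails the same way. A sketch of the same lookup done programmatically with k8s.io/client-go (illustrative, not part of the test suite):

	// lookup.go: hypothetical reproducer using client-go, not minikube code.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the ambient kubeconfig, then pin the context the tests asked for;
		// loading succeeds, but building a client config fails because only
		// force-systemd-flag-471739 is defined.
		loader := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "cilium-637059"}
		cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loader, overrides)
		if _, err := cfg.ClientConfig(); err != nil {
			fmt.Println(err) // e.g.: context "cilium-637059" does not exist
		}
	}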

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-637059

>>> host: docker daemon status:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: docker daemon config:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: docker system info:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: cri-docker daemon status:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: cri-docker daemon config:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: cri-dockerd version:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: containerd daemon status:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: containerd daemon config:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: containerd config dump:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: crio daemon status:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: crio daemon config:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: /etc/crio:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

>>> host: crio config:
* Profile "cilium-637059" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637059"

----------------------- debugLogs end: cilium-637059 [took: 5.400853262s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-637059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-637059
--- SKIP: TestNetworkPlugins/group/cilium (5.56s)