Test Report: Docker_Linux_containerd_arm64 18239

59a59c81047135cbdfd2a30078659de6ff7ddc30:2024-03-07:33453

Test failures (7/335)

TestAddons/parallel/Ingress (39.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-678595 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-678595 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-678595 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ae03127a-427c-4d42-9f2e-260d52b6299a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ae03127a-427c-4d42-9f2e-260d52b6299a] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003420176s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-678595 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-678595 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-678595 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.072941403s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-678595 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-678595 addons disable ingress-dns --alsologtostderr -v=1: (1.569344304s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-678595 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-678595 addons disable ingress --alsologtostderr -v=1: (8.109154383s)
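Note: every step up to the ingress-dns check passed (the nginx pod went Ready and the in-cluster curl at addons_test.go:262 returned cleanly); the test failed only because nslookup against the node IP 192.168.49.2 timed out. A minimal manual re-check of that step, assuming the profile is still up (the grep pattern and the kube-system namespace reflect minikube's default ingress-dns deployment and are assumptions here):

	# Is the ingress-dns pod actually running? (namespace/name assumed per minikube defaults)
	kubectl --context addons-678595 -n kube-system get pods | grep ingress-dns
	# Repeat the failing lookup against the node IP that "minikube ip" reports
	nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-678595 ip)"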
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-678595
helpers_test.go:235: (dbg) docker inspect addons-678595:

-- stdout --
	[
	    {
	        "Id": "85cdad87b2254283c063d2072ee9ac5d0b3dc84c163df9ab18811331087929d3",
	        "Created": "2024-03-07T18:44:24.125443191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 564854,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-07T18:44:24.475616994Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/85cdad87b2254283c063d2072ee9ac5d0b3dc84c163df9ab18811331087929d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/85cdad87b2254283c063d2072ee9ac5d0b3dc84c163df9ab18811331087929d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/85cdad87b2254283c063d2072ee9ac5d0b3dc84c163df9ab18811331087929d3/hosts",
	        "LogPath": "/var/lib/docker/containers/85cdad87b2254283c063d2072ee9ac5d0b3dc84c163df9ab18811331087929d3/85cdad87b2254283c063d2072ee9ac5d0b3dc84c163df9ab18811331087929d3-json.log",
	        "Name": "/addons-678595",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-678595:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-678595",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/41f230be6a35413ce14153425fd6bef04ab6e266d40e618b6e159325a0bb13c4-init/diff:/var/lib/docker/overlay2/0f2c2bc9ebcb6a090c4ed5f3df98eb2fb852fa3a78be98cc34cd75b1870e6d76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41f230be6a35413ce14153425fd6bef04ab6e266d40e618b6e159325a0bb13c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41f230be6a35413ce14153425fd6bef04ab6e266d40e618b6e159325a0bb13c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41f230be6a35413ce14153425fd6bef04ab6e266d40e618b6e159325a0bb13c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-678595",
	                "Source": "/var/lib/docker/volumes/addons-678595/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-678595",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-678595",
	                "name.minikube.sigs.k8s.io": "addons-678595",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4949ab3fae128b74645f0e242f6737e528213832f29e7811f78dbea2bfb37650",
	            "SandboxKey": "/var/run/docker/netns/4949ab3fae12",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-678595": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "85cdad87b225",
	                        "addons-678595"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "d7f2a11e0ae357eca594ee7c25c2afec0b1029dfded81ef2c7dfc0bc99615c1f",
	                    "EndpointID": "44352f1ee39c3578c89c76482c14777ef29104a30f8bf6a5fb5b7836cb01fb5e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-678595",
	                        "85cdad87b225"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
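Two details in this inspect output bear on the failure: the PortBindings entries request HostPort "" so Docker assigns ephemeral host ports, with the resolved values under NetworkSettings.Ports (e.g. 22/tcp -> 127.0.0.1:33513, the SSH endpoint the provisioning log below dials); and the container is pinned to 192.168.49.2 on the addons-678595 network, the exact address the failing nslookup targeted. Single fields can be read back with a --format template instead of dumping the whole object; a sketch, using a trimmed version of the template minikube itself runs later in these logs:

	# Print the container's IPv4 address on its attached network
	docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" addons-678595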
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-678595 -n addons-678595
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-678595 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-678595 logs -n 25: (1.491236804s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC | 07 Mar 24 18:43 UTC |
	| delete  | -p download-only-992905              | download-only-992905   | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC | 07 Mar 24 18:43 UTC |
	| start   | -o=json --download-only              | download-only-843119   | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC |                     |
	|         | -p download-only-843119              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC | 07 Mar 24 18:43 UTC |
	| delete  | -p download-only-843119              | download-only-843119   | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC | 07 Mar 24 18:43 UTC |
	| delete  | -p download-only-584237              | download-only-584237   | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC | 07 Mar 24 18:43 UTC |
	| delete  | -p download-only-992905              | download-only-992905   | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC | 07 Mar 24 18:43 UTC |
	| delete  | -p download-only-843119              | download-only-843119   | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC | 07 Mar 24 18:43 UTC |
	| start   | --download-only -p                   | download-docker-197042 | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC |                     |
	|         | download-docker-197042               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-197042            | download-docker-197042 | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC | 07 Mar 24 18:44 UTC |
	| start   | --download-only -p                   | binary-mirror-512824   | jenkins | v1.32.0 | 07 Mar 24 18:44 UTC |                     |
	|         | binary-mirror-512824                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36839               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-512824              | binary-mirror-512824   | jenkins | v1.32.0 | 07 Mar 24 18:44 UTC | 07 Mar 24 18:44 UTC |
	| addons  | enable dashboard -p                  | addons-678595          | jenkins | v1.32.0 | 07 Mar 24 18:44 UTC |                     |
	|         | addons-678595                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-678595          | jenkins | v1.32.0 | 07 Mar 24 18:44 UTC |                     |
	|         | addons-678595                        |                        |         |         |                     |                     |
	| start   | -p addons-678595 --wait=true         | addons-678595          | jenkins | v1.32.0 | 07 Mar 24 18:44 UTC | 07 Mar 24 18:46 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| ip      | addons-678595 ip                     | addons-678595          | jenkins | v1.32.0 | 07 Mar 24 18:46 UTC | 07 Mar 24 18:46 UTC |
	| addons  | addons-678595 addons disable         | addons-678595          | jenkins | v1.32.0 | 07 Mar 24 18:46 UTC | 07 Mar 24 18:46 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-678595 addons                 | addons-678595          | jenkins | v1.32.0 | 07 Mar 24 18:46 UTC | 07 Mar 24 18:46 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-678595          | jenkins | v1.32.0 | 07 Mar 24 18:46 UTC | 07 Mar 24 18:46 UTC |
	|         | addons-678595                        |                        |         |         |                     |                     |
	| ssh     | addons-678595 ssh curl -s            | addons-678595          | jenkins | v1.32.0 | 07 Mar 24 18:46 UTC | 07 Mar 24 18:46 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-678595 ip                     | addons-678595          | jenkins | v1.32.0 | 07 Mar 24 18:46 UTC | 07 Mar 24 18:46 UTC |
	| addons  | addons-678595 addons                 | addons-678595          | jenkins | v1.32.0 | 07 Mar 24 18:47 UTC | 07 Mar 24 18:47 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-678595 addons disable         | addons-678595          | jenkins | v1.32.0 | 07 Mar 24 18:47 UTC | 07 Mar 24 18:47 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-678595 addons disable         | addons-678595          | jenkins | v1.32.0 | 07 Mar 24 18:47 UTC | 07 Mar 24 18:47 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-678595 addons                 | addons-678595          | jenkins | v1.32.0 | 07 Mar 24 18:47 UTC | 07 Mar 24 18:47 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 18:44:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:44:01.060653  564395 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:44:01.060808  564395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:44:01.060821  564395 out.go:304] Setting ErrFile to fd 2...
	I0307 18:44:01.060853  564395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:44:01.061133  564395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
	I0307 18:44:01.061632  564395 out.go:298] Setting JSON to false
	I0307 18:44:01.062551  564395 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8785,"bootTime":1709828256,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 18:44:01.062623  564395 start.go:139] virtualization:  
	I0307 18:44:01.065391  564395 out.go:177] * [addons-678595] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 18:44:01.068098  564395 out.go:177]   - MINIKUBE_LOCATION=18239
	I0307 18:44:01.069966  564395 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:44:01.068247  564395 notify.go:220] Checking for updates...
	I0307 18:44:01.071989  564395 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	I0307 18:44:01.073910  564395 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	I0307 18:44:01.076245  564395 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 18:44:01.077877  564395 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 18:44:01.080226  564395 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 18:44:01.101259  564395 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 18:44:01.101409  564395 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:44:01.173037  564395 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 18:44:01.162847934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:44:01.173149  564395 docker.go:295] overlay module found
	I0307 18:44:01.175691  564395 out.go:177] * Using the docker driver based on user configuration
	I0307 18:44:01.177639  564395 start.go:297] selected driver: docker
	I0307 18:44:01.177662  564395 start.go:901] validating driver "docker" against <nil>
	I0307 18:44:01.177684  564395 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 18:44:01.178377  564395 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:44:01.230987  564395 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 18:44:01.222115854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:44:01.231188  564395 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 18:44:01.231425  564395 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 18:44:01.233815  564395 out.go:177] * Using Docker driver with root privileges
	I0307 18:44:01.236027  564395 cni.go:84] Creating CNI manager for ""
	I0307 18:44:01.236052  564395 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 18:44:01.236071  564395 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 18:44:01.236172  564395 start.go:340] cluster config:
	{Name:addons-678595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-678595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:44:01.238905  564395 out.go:177] * Starting "addons-678595" primary control-plane node in "addons-678595" cluster
	I0307 18:44:01.241637  564395 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 18:44:01.244147  564395 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0307 18:44:01.246274  564395 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 18:44:01.246347  564395 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0307 18:44:01.246366  564395 cache.go:56] Caching tarball of preloaded images
	I0307 18:44:01.246371  564395 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 18:44:01.246458  564395 preload.go:173] Found /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 18:44:01.246468  564395 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0307 18:44:01.246819  564395 profile.go:142] Saving config to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/config.json ...
	I0307 18:44:01.246849  564395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/config.json: {Name:mk58e82da60d54848d05497437937ff4f208ea29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:44:01.261070  564395 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 18:44:01.261203  564395 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 18:44:01.261223  564395 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0307 18:44:01.261228  564395 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0307 18:44:01.261236  564395 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0307 18:44:01.261241  564395 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from local cache
	I0307 18:44:17.032053  564395 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from cached tarball
	I0307 18:44:17.032095  564395 cache.go:194] Successfully downloaded all kic artifacts
	I0307 18:44:17.032142  564395 start.go:360] acquireMachinesLock for addons-678595: {Name:mk4433cad09b5b0ea2be413b37a3b3fdde711a79 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:44:17.032291  564395 start.go:364] duration metric: took 120.188µs to acquireMachinesLock for "addons-678595"
	I0307 18:44:17.032329  564395 start.go:93] Provisioning new machine with config: &{Name:addons-678595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-678595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0307 18:44:17.032439  564395 start.go:125] createHost starting for "" (driver="docker")
	I0307 18:44:17.035145  564395 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0307 18:44:17.035404  564395 start.go:159] libmachine.API.Create for "addons-678595" (driver="docker")
	I0307 18:44:17.035441  564395 client.go:168] LocalClient.Create starting
	I0307 18:44:17.035581  564395 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem
	I0307 18:44:17.253269  564395 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/cert.pem
	I0307 18:44:17.746608  564395 cli_runner.go:164] Run: docker network inspect addons-678595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 18:44:17.761488  564395 cli_runner.go:211] docker network inspect addons-678595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 18:44:17.761597  564395 network_create.go:281] running [docker network inspect addons-678595] to gather additional debugging logs...
	I0307 18:44:17.761621  564395 cli_runner.go:164] Run: docker network inspect addons-678595
	W0307 18:44:17.777319  564395 cli_runner.go:211] docker network inspect addons-678595 returned with exit code 1
	I0307 18:44:17.777356  564395 network_create.go:284] error running [docker network inspect addons-678595]: docker network inspect addons-678595: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-678595 not found
	I0307 18:44:17.777381  564395 network_create.go:286] output of [docker network inspect addons-678595]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-678595 not found
	
	** /stderr **
	I0307 18:44:17.777488  564395 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 18:44:17.792986  564395 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000b53b80}
	I0307 18:44:17.793033  564395 network_create.go:124] attempt to create docker network addons-678595 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0307 18:44:17.793109  564395 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-678595 addons-678595
	I0307 18:44:17.855002  564395 network_create.go:108] docker network addons-678595 192.168.49.0/24 created
	I0307 18:44:17.855037  564395 kic.go:121] calculated static IP "192.168.49.2" for the "addons-678595" container
	I0307 18:44:17.855129  564395 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 18:44:17.870073  564395 cli_runner.go:164] Run: docker volume create addons-678595 --label name.minikube.sigs.k8s.io=addons-678595 --label created_by.minikube.sigs.k8s.io=true
	I0307 18:44:17.886236  564395 oci.go:103] Successfully created a docker volume addons-678595
	I0307 18:44:17.886325  564395 cli_runner.go:164] Run: docker run --rm --name addons-678595-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-678595 --entrypoint /usr/bin/test -v addons-678595:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0307 18:44:19.834947  564395 cli_runner.go:217] Completed: docker run --rm --name addons-678595-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-678595 --entrypoint /usr/bin/test -v addons-678595:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (1.948582416s)
	I0307 18:44:19.834983  564395 oci.go:107] Successfully prepared a docker volume addons-678595
	I0307 18:44:19.835021  564395 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 18:44:19.835044  564395 kic.go:194] Starting extracting preloaded images to volume ...
	I0307 18:44:19.835123  564395 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-678595:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 18:44:24.046580  564395 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-678595:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (4.211398361s)
	I0307 18:44:24.046616  564395 kic.go:203] duration metric: took 4.211567713s to extract preloaded images to volume ...
	W0307 18:44:24.046762  564395 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0307 18:44:24.046876  564395 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0307 18:44:24.110996  564395 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-678595 --name addons-678595 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-678595 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-678595 --network addons-678595 --ip 192.168.49.2 --volume addons-678595:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0307 18:44:24.484359  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Running}}
	I0307 18:44:24.505699  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:44:24.528529  564395 cli_runner.go:164] Run: docker exec addons-678595 stat /var/lib/dpkg/alternatives/iptables
	I0307 18:44:24.596912  564395 oci.go:144] the created container "addons-678595" has a running status.
	I0307 18:44:24.596937  564395 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa...
	I0307 18:44:25.516327  564395 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0307 18:44:25.535876  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:44:25.555145  564395 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0307 18:44:25.555164  564395 kic_runner.go:114] Args: [docker exec --privileged addons-678595 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0307 18:44:25.620443  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:44:25.643227  564395 machine.go:94] provisionDockerMachine start ...
	I0307 18:44:25.643315  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:44:25.659586  564395 main.go:141] libmachine: Using SSH client type: native
	I0307 18:44:25.659836  564395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I0307 18:44:25.659845  564395 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 18:44:25.788834  564395 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-678595
	
	I0307 18:44:25.788859  564395 ubuntu.go:169] provisioning hostname "addons-678595"
	I0307 18:44:25.788930  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:44:25.805054  564395 main.go:141] libmachine: Using SSH client type: native
	I0307 18:44:25.805308  564395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I0307 18:44:25.805325  564395 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-678595 && echo "addons-678595" | sudo tee /etc/hostname
	I0307 18:44:25.948729  564395 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-678595
	
	I0307 18:44:25.948811  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:44:25.965229  564395 main.go:141] libmachine: Using SSH client type: native
	I0307 18:44:25.965491  564395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I0307 18:44:25.965534  564395 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-678595' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-678595/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-678595' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 18:44:26.093369  564395 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 18:44:26.093410  564395 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18239-558171/.minikube CaCertPath:/home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18239-558171/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18239-558171/.minikube}
	I0307 18:44:26.093430  564395 ubuntu.go:177] setting up certificates
	I0307 18:44:26.093441  564395 provision.go:84] configureAuth start
	I0307 18:44:26.093507  564395 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-678595
	I0307 18:44:26.109005  564395 provision.go:143] copyHostCerts
	I0307 18:44:26.109092  564395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18239-558171/.minikube/cert.pem (1123 bytes)
	I0307 18:44:26.109254  564395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18239-558171/.minikube/key.pem (1675 bytes)
	I0307 18:44:26.109327  564395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18239-558171/.minikube/ca.pem (1082 bytes)
	I0307 18:44:26.109396  564395 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18239-558171/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca-key.pem org=jenkins.addons-678595 san=[127.0.0.1 192.168.49.2 addons-678595 localhost minikube]
	I0307 18:44:26.409096  564395 provision.go:177] copyRemoteCerts
	I0307 18:44:26.409163  564395 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 18:44:26.409210  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:44:26.428406  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:44:26.522655  564395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 18:44:26.546735  564395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0307 18:44:26.570431  564395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 18:44:26.594731  564395 provision.go:87] duration metric: took 501.261798ms to configureAuth
	I0307 18:44:26.594802  564395 ubuntu.go:193] setting minikube options for container-runtime
	I0307 18:44:26.595030  564395 config.go:182] Loaded profile config "addons-678595": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 18:44:26.595047  564395 machine.go:97] duration metric: took 951.801392ms to provisionDockerMachine
	I0307 18:44:26.595055  564395 client.go:171] duration metric: took 9.559604777s to LocalClient.Create
	I0307 18:44:26.595089  564395 start.go:167] duration metric: took 9.559685498s to libmachine.API.Create "addons-678595"
	I0307 18:44:26.595105  564395 start.go:293] postStartSetup for "addons-678595" (driver="docker")
	I0307 18:44:26.595114  564395 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 18:44:26.595189  564395 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 18:44:26.595232  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:44:26.610740  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:44:26.702651  564395 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 18:44:26.705659  564395 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0307 18:44:26.705696  564395 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0307 18:44:26.705708  564395 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0307 18:44:26.705715  564395 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0307 18:44:26.705726  564395 filesync.go:126] Scanning /home/jenkins/minikube-integration/18239-558171/.minikube/addons for local assets ...
	I0307 18:44:26.705793  564395 filesync.go:126] Scanning /home/jenkins/minikube-integration/18239-558171/.minikube/files for local assets ...
	I0307 18:44:26.705822  564395 start.go:296] duration metric: took 110.712169ms for postStartSetup
	I0307 18:44:26.706134  564395 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-678595
	I0307 18:44:26.723049  564395 profile.go:142] Saving config to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/config.json ...
	I0307 18:44:26.723347  564395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 18:44:26.723400  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:44:26.738691  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:44:26.826979  564395 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
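The two df probes read the used percentage and free space of /var inside the node container. On a healthy node they print something like (values illustrative):

	$ df -h /var | awk 'NR==2{print $5}'
	12%
	$ df -BG /var | awk 'NR==2{print $4}'
	170G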
	I0307 18:44:26.831721  564395 start.go:128] duration metric: took 9.79926705s to createHost
	I0307 18:44:26.831758  564395 start.go:83] releasing machines lock for "addons-678595", held for 9.799454002s
	I0307 18:44:26.831859  564395 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-678595
	I0307 18:44:26.848328  564395 ssh_runner.go:195] Run: cat /version.json
	I0307 18:44:26.848379  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:44:26.848394  564395 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 18:44:26.848456  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:44:26.865372  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:44:26.874896  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:44:26.952842  564395 ssh_runner.go:195] Run: systemctl --version
	I0307 18:44:27.065967  564395 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 18:44:27.070532  564395 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0307 18:44:27.098041  564395 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
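The find/sed pipeline above injects a "name" field into any loopback CNI config that lacks one and pins its cniVersion to 1.0.0. A sketch of a patched file (filename and original contents assumed):

	$ cat /etc/cni/net.d/200-loopback.conf
	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}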
	I0307 18:44:27.098122  564395 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 18:44:27.128151  564395 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0307 18:44:27.128224  564395 start.go:494] detecting cgroup driver to use...
	I0307 18:44:27.128279  564395 detect.go:196] detected "cgroupfs" cgroup driver on host os
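Driver detection here keys on the host's cgroup hierarchy. One common probe (an assumption; not necessarily what detect.go runs) is the filesystem type mounted at /sys/fs/cgroup:

	$ stat -fc %T /sys/fs/cgroup/
	tmpfs
	# "tmpfs" indicates a cgroup v1 mount, consistent with the "cgroupfs" driver chosen above;
	# "cgroup2fs" would indicate a unified cgroup v2 hierarchy.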
	I0307 18:44:27.128361  564395 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 18:44:27.140988  564395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 18:44:27.152582  564395 docker.go:217] disabling cri-docker service (if available) ...
	I0307 18:44:27.152687  564395 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0307 18:44:27.165962  564395 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0307 18:44:27.179881  564395 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0307 18:44:27.260070  564395 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0307 18:44:27.359252  564395 docker.go:233] disabling docker service ...
	I0307 18:44:27.359320  564395 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0307 18:44:27.379335  564395 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0307 18:44:27.390480  564395 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0307 18:44:27.479962  564395 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0307 18:44:27.571677  564395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0307 18:44:27.583308  564395 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 18:44:27.601435  564395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 18:44:27.611986  564395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 18:44:27.622406  564395 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 18:44:27.622517  564395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 18:44:27.632928  564395 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:44:27.644123  564395 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 18:44:27.655328  564395 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:44:27.666152  564395 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 18:44:27.675943  564395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 18:44:27.686130  564395 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 18:44:27.694608  564395 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 18:44:27.702665  564395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:44:27.789690  564395 ssh_runner.go:195] Run: sudo systemctl restart containerd
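Taken together, the sed edits above leave the CRI plugin section of /etc/containerd/config.toml configured for the cgroupfs driver, the pause 3.9 sandbox image, the runc v2 shim, and the standard CNI conf dir. A sketch of the resulting fragment (exact surrounding layout depends on the stock config shipped in the kicbase image):

	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"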
	I0307 18:44:27.928590  564395 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0307 18:44:27.928718  564395 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0307 18:44:27.932472  564395 start.go:562] Will wait 60s for crictl version
	I0307 18:44:27.932592  564395 ssh_runner.go:195] Run: which crictl
	I0307 18:44:27.935995  564395 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 18:44:27.974784  564395 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0307 18:44:27.974909  564395 ssh_runner.go:195] Run: containerd --version
	I0307 18:44:27.996153  564395 ssh_runner.go:195] Run: containerd --version
	I0307 18:44:28.023432  564395 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
	I0307 18:44:28.026386  564395 cli_runner.go:164] Run: docker network inspect addons-678595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 18:44:28.042049  564395 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0307 18:44:28.045983  564395 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
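The grep -v/echo pattern above rewrites /etc/hosts via a temp file: any stale host.minikube.internal entry is filtered out and the current gateway IP is appended, leaving a line like:

	192.168.49.1	host.minikube.internal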
	I0307 18:44:28.057097  564395 kubeadm.go:877] updating cluster {Name:addons-678595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-678595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 18:44:28.057225  564395 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 18:44:28.057344  564395 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 18:44:28.095186  564395 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 18:44:28.095211  564395 containerd.go:519] Images already preloaded, skipping extraction
	I0307 18:44:28.095271  564395 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 18:44:28.131020  564395 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 18:44:28.131042  564395 cache_images.go:84] Images are preloaded, skipping loading
	I0307 18:44:28.131051  564395 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 containerd true true} ...
	I0307 18:44:28.131155  564395 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-678595 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-678595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 18:44:28.131221  564395 ssh_runner.go:195] Run: sudo crictl info
	I0307 18:44:28.166670  564395 cni.go:84] Creating CNI manager for ""
	I0307 18:44:28.166696  564395 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 18:44:28.166708  564395 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 18:44:28.166729  564395 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-678595 NodeName:addons-678595 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 18:44:28.166856  564395 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-678595"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
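Before this rendered file is handed to kubeadm init, it can be sanity-checked offline; in v1.28 both of the following should work (shown as an aside, not taken from this log):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run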
	
	I0307 18:44:28.166928  564395 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 18:44:28.175782  564395 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 18:44:28.175854  564395 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 18:44:28.184352  564395 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0307 18:44:28.202492  564395 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 18:44:28.220367  564395 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0307 18:44:28.238017  564395 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0307 18:44:28.241428  564395 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 18:44:28.252166  564395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:44:28.337659  564395 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 18:44:28.351165  564395 certs.go:68] Setting up /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595 for IP: 192.168.49.2
	I0307 18:44:28.351189  564395 certs.go:194] generating shared ca certs ...
	I0307 18:44:28.351214  564395 certs.go:226] acquiring lock for ca certs: {Name:mke14792b1616e9503645c7147aed38043ea5d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:44:28.351351  564395 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18239-558171/.minikube/ca.key
	I0307 18:44:28.708150  564395 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18239-558171/.minikube/ca.crt ...
	I0307 18:44:28.708186  564395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/ca.crt: {Name:mked35caa8a6a67f5c2bd0721fc93f757e0bd6a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:44:28.708874  564395 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18239-558171/.minikube/ca.key ...
	I0307 18:44:28.708892  564395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/ca.key: {Name:mkea4bb2603373938f9ab0855017d6ade7628470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:44:28.709005  564395 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18239-558171/.minikube/proxy-client-ca.key
	I0307 18:44:30.259059  564395 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18239-558171/.minikube/proxy-client-ca.crt ...
	I0307 18:44:30.259091  564395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/proxy-client-ca.crt: {Name:mk3bcd0de96196199a8922adee71f0b95122c653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:44:30.259984  564395 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18239-558171/.minikube/proxy-client-ca.key ...
	I0307 18:44:30.260001  564395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/proxy-client-ca.key: {Name:mk362e154a5367de6c13e1fffdccef572fa53e4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:44:30.260096  564395 certs.go:256] generating profile certs ...
	I0307 18:44:30.260161  564395 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.key
	I0307 18:44:30.260179  564395 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt with IP's: []
	I0307 18:44:30.752935  564395 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt ...
	I0307 18:44:30.752970  564395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: {Name:mk2afbc3d8501ad8980aaa187218d2a638e05197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:44:30.753167  564395 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.key ...
	I0307 18:44:30.753181  564395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.key: {Name:mkfa12ff4d7e37786140696b6beea80998badde9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:44:30.753277  564395 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/apiserver.key.a6015da8
	I0307 18:44:30.753298  564395 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/apiserver.crt.a6015da8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0307 18:44:31.381093  564395 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/apiserver.crt.a6015da8 ...
	I0307 18:44:31.381124  564395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/apiserver.crt.a6015da8: {Name:mkcd479120b7e5c6bc0e64d96f438a38a4b3a73f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:44:31.381313  564395 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/apiserver.key.a6015da8 ...
	I0307 18:44:31.381328  564395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/apiserver.key.a6015da8: {Name:mk30e49f46c546f61e3d4eb16c5b284783cf6f25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:44:31.381415  564395 certs.go:381] copying /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/apiserver.crt.a6015da8 -> /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/apiserver.crt
	I0307 18:44:31.381505  564395 certs.go:385] copying /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/apiserver.key.a6015da8 -> /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/apiserver.key
	I0307 18:44:31.381579  564395 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/proxy-client.key
	I0307 18:44:31.381601  564395 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/proxy-client.crt with IP's: []
	I0307 18:44:31.638526  564395 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/proxy-client.crt ...
	I0307 18:44:31.638561  564395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/proxy-client.crt: {Name:mk78f1c8631f07ad415dae69dfc9811009a234b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:44:31.639368  564395 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/proxy-client.key ...
	I0307 18:44:31.639394  564395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/proxy-client.key: {Name:mk173c17d380fb9f9b5c7ed5f97216faef1864d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:44:31.640634  564395 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 18:44:31.640683  564395 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem (1082 bytes)
	I0307 18:44:31.640712  564395 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/cert.pem (1123 bytes)
	I0307 18:44:31.640741  564395 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/key.pem (1675 bytes)
	I0307 18:44:31.641355  564395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 18:44:31.665870  564395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 18:44:31.690386  564395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 18:44:31.714833  564395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 18:44:31.739564  564395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0307 18:44:31.763043  564395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 18:44:31.786435  564395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 18:44:31.810168  564395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 18:44:31.833453  564395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 18:44:31.857426  564395 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 18:44:31.874957  564395 ssh_runner.go:195] Run: openssl version
	I0307 18:44:31.880319  564395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 18:44:31.889661  564395 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:44:31.893025  564395 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:44:31.893086  564395 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:44:31.899948  564395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
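OpenSSL resolves trust anchors in /etc/ssl/certs through subject-hash-named symlinks, which is why the CA is linked as b5213941.0: the hash in the link name is exactly what the x509 -hash command above prints. A bash sketch of the same two steps:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"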
	I0307 18:44:31.909063  564395 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 18:44:31.912334  564395 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0307 18:44:31.912382  564395 kubeadm.go:391] StartCluster: {Name:addons-678595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-678595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:44:31.912480  564395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0307 18:44:31.912544  564395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0307 18:44:31.950479  564395 cri.go:89] found id: ""
	I0307 18:44:31.950549  564395 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 18:44:31.959185  564395 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 18:44:31.967741  564395 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0307 18:44:31.967807  564395 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 18:44:31.976468  564395 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 18:44:31.976534  564395 kubeadm.go:156] found existing configuration files:
	
	I0307 18:44:31.976607  564395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0307 18:44:31.985072  564395 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 18:44:31.985162  564395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 18:44:31.993589  564395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0307 18:44:32.004092  564395 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 18:44:32.004171  564395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 18:44:32.013602  564395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0307 18:44:32.022654  564395 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 18:44:32.022731  564395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 18:44:32.031982  564395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0307 18:44:32.041326  564395 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 18:44:32.041436  564395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 18:44:32.050681  564395 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0307 18:44:32.168080  564395 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0307 18:44:32.246825  564395 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 18:44:48.046822  564395 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0307 18:44:48.046880  564395 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 18:44:48.046976  564395 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0307 18:44:48.047030  564395 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0307 18:44:48.047064  564395 kubeadm.go:309] OS: Linux
	I0307 18:44:48.047111  564395 kubeadm.go:309] CGROUPS_CPU: enabled
	I0307 18:44:48.047157  564395 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0307 18:44:48.047217  564395 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0307 18:44:48.047265  564395 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0307 18:44:48.047311  564395 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0307 18:44:48.047360  564395 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0307 18:44:48.047406  564395 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0307 18:44:48.047452  564395 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0307 18:44:48.047496  564395 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0307 18:44:48.047566  564395 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 18:44:48.047656  564395 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 18:44:48.047745  564395 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0307 18:44:48.047805  564395 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 18:44:48.049680  564395 out.go:204]   - Generating certificates and keys ...
	I0307 18:44:48.049778  564395 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 18:44:48.049842  564395 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 18:44:48.049909  564395 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0307 18:44:48.049965  564395 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0307 18:44:48.050022  564395 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0307 18:44:48.050071  564395 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0307 18:44:48.050122  564395 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0307 18:44:48.050235  564395 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-678595 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0307 18:44:48.050285  564395 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0307 18:44:48.050393  564395 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-678595 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0307 18:44:48.050455  564395 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0307 18:44:48.050516  564395 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0307 18:44:48.050558  564395 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0307 18:44:48.050617  564395 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 18:44:48.050666  564395 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 18:44:48.050718  564395 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 18:44:48.050782  564395 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 18:44:48.050834  564395 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 18:44:48.050912  564395 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 18:44:48.050975  564395 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 18:44:48.053022  564395 out.go:204]   - Booting up control plane ...
	I0307 18:44:48.053223  564395 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 18:44:48.053344  564395 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 18:44:48.053451  564395 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 18:44:48.053647  564395 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 18:44:48.053776  564395 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 18:44:48.053863  564395 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 18:44:48.054074  564395 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 18:44:48.054208  564395 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.503477 seconds
	I0307 18:44:48.054356  564395 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 18:44:48.054489  564395 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 18:44:48.054549  564395 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 18:44:48.054735  564395 kubeadm.go:309] [mark-control-plane] Marking the node addons-678595 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 18:44:48.054793  564395 kubeadm.go:309] [bootstrap-token] Using token: 9ij34i.01zpixh4bhhkp5ud
	I0307 18:44:48.056794  564395 out.go:204]   - Configuring RBAC rules ...
	I0307 18:44:48.056905  564395 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 18:44:48.056985  564395 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 18:44:48.057131  564395 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 18:44:48.057262  564395 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 18:44:48.057373  564395 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 18:44:48.057469  564395 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 18:44:48.057699  564395 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 18:44:48.057747  564395 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 18:44:48.057791  564395 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 18:44:48.057795  564395 kubeadm.go:309] 
	I0307 18:44:48.057858  564395 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 18:44:48.057862  564395 kubeadm.go:309] 
	I0307 18:44:48.057935  564395 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 18:44:48.057940  564395 kubeadm.go:309] 
	I0307 18:44:48.057964  564395 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 18:44:48.058020  564395 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 18:44:48.058068  564395 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 18:44:48.058072  564395 kubeadm.go:309] 
	I0307 18:44:48.058124  564395 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 18:44:48.058127  564395 kubeadm.go:309] 
	I0307 18:44:48.058173  564395 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 18:44:48.058177  564395 kubeadm.go:309] 
	I0307 18:44:48.058230  564395 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 18:44:48.058301  564395 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 18:44:48.058366  564395 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 18:44:48.058370  564395 kubeadm.go:309] 
	I0307 18:44:48.058461  564395 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 18:44:48.058536  564395 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 18:44:48.058540  564395 kubeadm.go:309] 
	I0307 18:44:48.058623  564395 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9ij34i.01zpixh4bhhkp5ud \
	I0307 18:44:48.058721  564395 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:17caf06007e0764138c1f585dfe115b801f228bdeee3cba3ea5bff5870a6e807 \
	I0307 18:44:48.058742  564395 kubeadm.go:309] 	--control-plane 
	I0307 18:44:48.058747  564395 kubeadm.go:309] 
	I0307 18:44:48.058828  564395 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 18:44:48.058832  564395 kubeadm.go:309] 
	I0307 18:44:48.058910  564395 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9ij34i.01zpixh4bhhkp5ud \
	I0307 18:44:48.059022  564395 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:17caf06007e0764138c1f585dfe115b801f228bdeee3cba3ea5bff5870a6e807 
	I0307 18:44:48.059031  564395 cni.go:84] Creating CNI manager for ""
	I0307 18:44:48.059038  564395 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 18:44:48.062896  564395 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0307 18:44:48.064962  564395 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0307 18:44:48.074097  564395 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0307 18:44:48.074116  564395 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0307 18:44:48.099652  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0307 18:44:49.139419  564395 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.039731267s)
	I0307 18:44:49.139454  564395 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 18:44:49.139572  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:49.139650  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-678595 minikube.k8s.io/updated_at=2024_03_07T18_44_49_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=526fad16cb967ea3a5b243df32efb88cb58b81ec minikube.k8s.io/name=addons-678595 minikube.k8s.io/primary=true
	I0307 18:44:49.287799  564395 ops.go:34] apiserver oom_adj: -16
	I0307 18:44:49.287900  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:49.788245  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:50.288011  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:50.789048  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:51.289043  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:51.788938  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:52.288256  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:52.788339  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:53.288797  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:53.788226  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:54.288068  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:54.788070  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:55.288127  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:55.788964  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:56.288066  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:56.788776  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:57.288720  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:57.788253  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:58.288655  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:58.788261  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:59.288984  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:44:59.788564  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:45:00.288143  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:45:00.788590  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:45:01.288608  564395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:45:01.477636  564395 kubeadm.go:1106] duration metric: took 12.338106901s to wait for elevateKubeSystemPrivileges
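The burst of kubectl get sa default calls above is a readiness poll: bring-up is not considered complete until the "default" ServiceAccount exists in the default namespace. A hypothetical shell equivalent of that loop (the 0.5s interval is inferred from the roughly two probes per second in the timestamps):

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done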
	W0307 18:45:01.477674  564395 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 18:45:01.477682  564395 kubeadm.go:393] duration metric: took 29.565304579s to StartCluster
	I0307 18:45:01.477703  564395 settings.go:142] acquiring lock: {Name:mkebfa804b6349436c6d99572f0f0da9cb5ad1f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:45:01.478242  564395 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18239-558171/kubeconfig
	I0307 18:45:01.478698  564395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/kubeconfig: {Name:mk6862a934ece36327360ff645a33ee6e04a2f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:45:01.478915  564395 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0307 18:45:01.481469  564395 out.go:177] * Verifying Kubernetes components...
	I0307 18:45:01.479017  564395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0307 18:45:01.479189  564395 config.go:182] Loaded profile config "addons-678595": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 18:45:01.479198  564395 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0307 18:45:01.483679  564395 addons.go:69] Setting yakd=true in profile "addons-678595"
	I0307 18:45:01.483721  564395 addons.go:234] Setting addon yakd=true in "addons-678595"
	I0307 18:45:01.483764  564395 host.go:66] Checking if "addons-678595" exists ...
	I0307 18:45:01.484302  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:01.484370  564395 addons.go:69] Setting ingress=true in profile "addons-678595"
	I0307 18:45:01.484395  564395 addons.go:234] Setting addon ingress=true in "addons-678595"
	I0307 18:45:01.484437  564395 host.go:66] Checking if "addons-678595" exists ...
	I0307 18:45:01.484826  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:01.488803  564395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:45:01.489019  564395 addons.go:69] Setting ingress-dns=true in profile "addons-678595"
	I0307 18:45:01.489057  564395 addons.go:234] Setting addon ingress-dns=true in "addons-678595"
	I0307 18:45:01.489171  564395 addons.go:69] Setting cloud-spanner=true in profile "addons-678595"
	I0307 18:45:01.489193  564395 addons.go:234] Setting addon cloud-spanner=true in "addons-678595"
	I0307 18:45:01.489222  564395 host.go:66] Checking if "addons-678595" exists ...
	I0307 18:45:01.489676  564395 host.go:66] Checking if "addons-678595" exists ...
	I0307 18:45:01.490058  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:01.490407  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:01.493683  564395 addons.go:69] Setting inspektor-gadget=true in profile "addons-678595"
	I0307 18:45:01.493727  564395 addons.go:234] Setting addon inspektor-gadget=true in "addons-678595"
	I0307 18:45:01.493762  564395 host.go:66] Checking if "addons-678595" exists ...
	I0307 18:45:01.494199  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:01.494557  564395 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-678595"
	I0307 18:45:01.494625  564395 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-678595"
	I0307 18:45:01.494654  564395 host.go:66] Checking if "addons-678595" exists ...
	I0307 18:45:01.495070  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:01.508058  564395 addons.go:69] Setting default-storageclass=true in profile "addons-678595"
	I0307 18:45:01.508101  564395 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-678595"
	I0307 18:45:01.508434  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:01.511120  564395 addons.go:69] Setting metrics-server=true in profile "addons-678595"
	I0307 18:45:01.511163  564395 addons.go:234] Setting addon metrics-server=true in "addons-678595"
	I0307 18:45:01.511201  564395 host.go:66] Checking if "addons-678595" exists ...
	I0307 18:45:01.511661  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:01.532844  564395 addons.go:69] Setting gcp-auth=true in profile "addons-678595"
	I0307 18:45:01.532912  564395 mustload.go:65] Loading cluster: addons-678595
	I0307 18:45:01.533115  564395 config.go:182] Loaded profile config "addons-678595": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 18:45:01.533382  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:01.534323  564395 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-678595"
	I0307 18:45:01.534359  564395 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-678595"
	I0307 18:45:01.534393  564395 host.go:66] Checking if "addons-678595" exists ...
	I0307 18:45:01.534811  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:01.549592  564395 addons.go:69] Setting registry=true in profile "addons-678595"
	I0307 18:45:01.549638  564395 addons.go:234] Setting addon registry=true in "addons-678595"
	I0307 18:45:01.549759  564395 host.go:66] Checking if "addons-678595" exists ...
	I0307 18:45:01.550412  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:01.573082  564395 addons.go:69] Setting storage-provisioner=true in profile "addons-678595"
	I0307 18:45:01.573235  564395 addons.go:234] Setting addon storage-provisioner=true in "addons-678595"
	I0307 18:45:01.573354  564395 host.go:66] Checking if "addons-678595" exists ...
	I0307 18:45:01.573908  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:01.593730  564395 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-678595"
	I0307 18:45:01.593778  564395 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-678595"
	I0307 18:45:01.594085  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:01.595234  564395 addons.go:69] Setting volumesnapshots=true in profile "addons-678595"
	I0307 18:45:01.595313  564395 addons.go:234] Setting addon volumesnapshots=true in "addons-678595"
	I0307 18:45:01.595379  564395 host.go:66] Checking if "addons-678595" exists ...
	I0307 18:45:01.601221  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
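
Each cli_runner inspect above is the same guard: before touching an addon, the goroutine confirms the node container is still up by reading its state through a Go template. Run by hand against this profile it reduces to:

	docker container inspect addons-678595 --format='{{.State.Status}}'
	# running
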
	I0307 18:45:01.643670  564395 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0307 18:45:01.646641  564395 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0307 18:45:01.646708  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0307 18:45:01.646814  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:45:01.681889  564395 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0307 18:45:01.708916  564395 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 18:45:01.716161  564395 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0307 18:45:01.716308  564395 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0307 18:45:01.716374  564395 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0307 18:45:01.719482  564395 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0307 18:45:01.723010  564395 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 18:45:01.723120  564395 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 18:45:01.729567  564395 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0307 18:45:01.732994  564395 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0307 18:45:01.733061  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0307 18:45:01.733157  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:45:01.729576  564395 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0307 18:45:01.729601  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0307 18:45:01.730068  564395 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0307 18:45:01.730143  564395 addons.go:234] Setting addon default-storageclass=true in "addons-678595"
	I0307 18:45:01.737387  564395 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0307 18:45:01.738039  564395 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 18:45:01.739056  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0307 18:45:01.739140  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:45:01.762911  564395 host.go:66] Checking if "addons-678595" exists ...
	I0307 18:45:01.763570  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:01.767439  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0307 18:45:01.767606  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:45:01.780014  564395 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0307 18:45:01.782305  564395 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0307 18:45:01.784574  564395 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0307 18:45:01.790065  564395 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0307 18:45:01.794920  564395 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0307 18:45:01.808545  564395 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0307 18:45:01.738107  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:45:01.738115  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0307 18:45:01.810390  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:45:01.817874  564395 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0307 18:45:01.817898  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0307 18:45:01.817971  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:45:01.843726  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:45:01.844147  564395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0307 18:45:01.844246  564395 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 18:45:01.850905  564395 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0307 18:45:01.853089  564395 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0307 18:45:01.862978  564395 out.go:177]   - Using image docker.io/registry:2.8.3
	I0307 18:45:01.853820  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0307 18:45:01.853906  564395 host.go:66] Checking if "addons-678595" exists ...
	I0307 18:45:01.868171  564395 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 18:45:01.864943  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:45:01.872133  564395 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-678595"
	I0307 18:45:01.872542  564395 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 18:45:01.875217  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 18:45:01.875354  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:45:01.920809  564395 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0307 18:45:01.923560  564395 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0307 18:45:01.923585  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0307 18:45:01.923656  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:45:01.956191  564395 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0307 18:45:01.954305  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:45:01.955241  564395 host.go:66] Checking if "addons-678595" exists ...
	I0307 18:45:01.958832  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:01.980806  564395 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 18:45:01.980827  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0307 18:45:01.980896  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:45:02.006368  564395 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 18:45:02.006449  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 18:45:02.006569  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:45:02.025409  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:45:02.031450  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:45:02.055713  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:45:02.077907  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:45:02.089727  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:45:02.133969  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:45:02.134093  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:45:02.143948  564395 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0307 18:45:02.141991  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:45:02.149298  564395 out.go:177]   - Using image docker.io/busybox:stable
	I0307 18:45:02.151205  564395 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 18:45:02.151226  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0307 18:45:02.151292  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:45:02.185196  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:45:02.190833  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:45:02.198986  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
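
The "scp memory --> path (N bytes)" entries throughout this phase do not copy files from the host filesystem: the manifests are assets embedded in the minikube binary, streamed over the SSH sessions just opened (127.0.0.1:33513, user docker). A rough manual equivalent, reusing this run's key path and assuming a local copy of the manifest, would be:

	ssh -i /home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa \
	  -p 33513 docker@127.0.0.1 'sudo tee /etc/kubernetes/addons/yakd-ns.yaml >/dev/null' < yakd-ns.yaml

(illustrative only; the real transfer goes through minikube's ssh_runner, not the ssh CLI)
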
	I0307 18:45:02.285694  564395 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0307 18:45:02.285720  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0307 18:45:02.396731  564395 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0307 18:45:02.396753  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0307 18:45:02.521557  564395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 18:45:02.550592  564395 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0307 18:45:02.550671  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0307 18:45:02.639184  564395 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0307 18:45:02.639259  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0307 18:45:02.661804  564395 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0307 18:45:02.661880  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0307 18:45:02.666708  564395 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0307 18:45:02.666781  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0307 18:45:02.684437  564395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 18:45:02.760426  564395 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0307 18:45:02.760503  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0307 18:45:02.798255  564395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0307 18:45:02.823186  564395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 18:45:02.828769  564395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 18:45:02.834250  564395 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0307 18:45:02.834328  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0307 18:45:02.843376  564395 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0307 18:45:02.843406  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0307 18:45:02.865845  564395 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0307 18:45:02.865869  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0307 18:45:02.910273  564395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 18:45:03.050879  564395 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0307 18:45:03.050903  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0307 18:45:03.084313  564395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 18:45:03.140496  564395 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 18:45:03.140523  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0307 18:45:03.146378  564395 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0307 18:45:03.146449  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0307 18:45:03.149841  564395 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0307 18:45:03.149912  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0307 18:45:03.185647  564395 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0307 18:45:03.185721  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0307 18:45:03.242639  564395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0307 18:45:03.383896  564395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 18:45:03.387782  564395 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0307 18:45:03.387853  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0307 18:45:03.390725  564395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0307 18:45:03.411829  564395 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0307 18:45:03.411905  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0307 18:45:03.444035  564395 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0307 18:45:03.444109  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0307 18:45:03.595706  564395 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0307 18:45:03.595779  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0307 18:45:03.628899  564395 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0307 18:45:03.628973  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0307 18:45:03.685444  564395 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0307 18:45:03.685535  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0307 18:45:03.835274  564395 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0307 18:45:03.835346  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0307 18:45:03.848725  564395 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0307 18:45:03.848799  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0307 18:45:03.913317  564395 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 18:45:03.913339  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0307 18:45:03.945169  564395 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0307 18:45:03.945194  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0307 18:45:03.973871  564395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 18:45:04.029173  564395 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0307 18:45:04.029202  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0307 18:45:04.102283  564395 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0307 18:45:04.102309  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0307 18:45:04.143638  564395 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.295149767s)
	I0307 18:45:04.144471  564395 node_ready.go:35] waiting up to 6m0s for node "addons-678595" to be "Ready" ...
	I0307 18:45:04.144665  564395 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.296925639s)
	I0307 18:45:04.144687  564395 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
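
The ConfigMap edit just completed is the sed pipeline started at 18:45:01.844147: it splices a hosts block in front of the forward directive and a log directive in front of errors, then pipes the result back through kubectl replace. Reconstructed from those sed expressions, the relevant Corefile fragment now reads (directives between errors and forward elided):

	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
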
	I0307 18:45:04.149891  564395 node_ready.go:49] node "addons-678595" has status "Ready":"True"
	I0307 18:45:04.149919  564395 node_ready.go:38] duration metric: took 5.42102ms for node "addons-678595" to be "Ready" ...
	I0307 18:45:04.149929  564395 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 18:45:04.185192  564395 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8dm2w" in "kube-system" namespace to be "Ready" ...
	I0307 18:45:04.202996  564395 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0307 18:45:04.203022  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0307 18:45:04.262225  564395 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0307 18:45:04.262251  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0307 18:45:04.288233  564395 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0307 18:45:04.288265  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0307 18:45:04.312895  564395 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 18:45:04.312921  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0307 18:45:04.441628  564395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0307 18:45:04.649216  564395 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-678595" context rescaled to 1 replicas
	I0307 18:45:04.698178  564395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 18:45:06.192228  564395 pod_ready.go:102] pod "coredns-5dd5756b68-8dm2w" in "kube-system" namespace has status "Ready":"False"
	I0307 18:45:08.221589  564395 pod_ready.go:102] pod "coredns-5dd5756b68-8dm2w" in "kube-system" namespace has status "Ready":"False"
	I0307 18:45:08.675835  564395 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0307 18:45:08.675931  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:45:08.703626  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:45:09.104120  564395 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0307 18:45:09.253142  564395 addons.go:234] Setting addon gcp-auth=true in "addons-678595"
	I0307 18:45:09.253250  564395 host.go:66] Checking if "addons-678595" exists ...
	I0307 18:45:09.253764  564395 cli_runner.go:164] Run: docker container inspect addons-678595 --format={{.State.Status}}
	I0307 18:45:09.277603  564395 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0307 18:45:09.277666  564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-678595
	I0307 18:45:09.308918  564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/addons-678595/id_rsa Username:docker}
	I0307 18:45:09.994218  564395 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.472582942s)
	I0307 18:45:09.994294  564395 addons.go:470] Verifying addon ingress=true in "addons-678595"
	I0307 18:45:09.996667  564395 out.go:177] * Verifying ingress addon...
	I0307 18:45:09.994496  564395 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.309993446s)
	I0307 18:45:09.994519  564395 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.196192091s)
	I0307 18:45:09.994554  564395 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.171302886s)
	I0307 18:45:09.994570  564395 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.165727718s)
	I0307 18:45:09.994620  564395 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.084325668s)
	I0307 18:45:09.994637  564395 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.910299825s)
	I0307 18:45:09.994657  564395 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.751947473s)
	I0307 18:45:09.994732  564395 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.610763788s)
	I0307 18:45:09.994761  564395 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.603979923s)
	I0307 18:45:09.994844  564395 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.020947531s)
	I0307 18:45:09.997074  564395 addons.go:470] Verifying addon metrics-server=true in "addons-678595"
	I0307 18:45:09.997162  564395 addons.go:470] Verifying addon registry=true in "addons-678595"
	W0307 18:45:09.997237  564395 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0307 18:45:10.000175  564395 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0307 18:45:10.001066  564395 retry.go:31] will retry after 126.970182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
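
Both failures above are one create-ordering race rather than a real error: the three snapshot.storage.k8s.io CRDs and the VolumeSnapshotClass that instantiates them go out in a single apply, and the API server has not yet established the new CRDs when the custom resource arrives, hence "resource mapping not found ... ensure CRDs are installed first". minikube simply retries (126ms later, then again with --force at 18:45:10.128508, below). Done by hand, the race is avoided by waiting for establishment first, along these lines (timeout value illustrative):

	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

(an illustrative alternative, not what the retry loop itself does)
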
	I0307 18:45:10.000795  564395 out.go:177] * Verifying registry addon...
	I0307 18:45:10.004244  564395 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0307 18:45:10.000805  564395 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-678595 service yakd-dashboard -n yakd-dashboard
	
	W0307 18:45:10.026009  564395 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
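
The default-storageclass warning above is an optimistic-concurrency conflict, not a hard failure: the callback read storage class local-path, a concurrent writer (the storage-provisioner-rancher apply from 18:45:02.910273) updated it in the meantime, and the stale-resourceVersion write was rejected. Re-reading and re-applying succeeds; the write being attempted is roughly an annotation flip of this form (annotation key assumed from upstream Kubernetes conventions, not taken from this log):

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
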
	I0307 18:45:10.027277  564395 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0307 18:45:10.027302  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:10.027810  564395 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0307 18:45:10.027841  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:10.128508  564395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 18:45:10.600657  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:10.605351  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:10.703056  564395 pod_ready.go:102] pod "coredns-5dd5756b68-8dm2w" in "kube-system" namespace has status "Ready":"False"
	I0307 18:45:11.024031  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:11.024562  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:11.559009  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:11.561851  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:11.588451  564395 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.146754345s)
	I0307 18:45:11.588528  564395 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-678595"
	I0307 18:45:11.591068  564395 out.go:177] * Verifying csi-hostpath-driver addon...
	I0307 18:45:11.588776  564395 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.890516554s)
	I0307 18:45:11.588804  564395 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.311177722s)
	I0307 18:45:11.596608  564395 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 18:45:11.594747  564395 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0307 18:45:11.601497  564395 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0307 18:45:11.603931  564395 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0307 18:45:11.603987  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0307 18:45:11.619036  564395 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0307 18:45:11.619068  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:11.664202  564395 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0307 18:45:11.664229  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0307 18:45:11.708688  564395 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0307 18:45:11.708761  564395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0307 18:45:11.734632  564395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0307 18:45:12.011494  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:12.026794  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:12.105709  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:12.241433  564395 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.112861562s)
	I0307 18:45:12.506424  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:12.509824  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:12.613665  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:12.711153  564395 pod_ready.go:102] pod "coredns-5dd5756b68-8dm2w" in "kube-system" namespace has status "Ready":"False"
	I0307 18:45:12.774945  564395 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.040227326s)
	I0307 18:45:12.778052  564395 addons.go:470] Verifying addon gcp-auth=true in "addons-678595"
	I0307 18:45:12.781806  564395 out.go:177] * Verifying gcp-auth addon...
	I0307 18:45:12.784652  564395 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0307 18:45:12.790194  564395 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0307 18:45:12.790222  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:13.014246  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:13.014255  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:13.105657  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:13.289188  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:13.505092  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:13.509376  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:13.605269  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:13.788792  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:14.007550  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:14.010427  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:14.105336  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:14.289089  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:14.505958  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:14.511474  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:14.605413  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:14.788397  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:15.017114  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:15.018647  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:15.105729  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:15.192390  564395 pod_ready.go:102] pod "coredns-5dd5756b68-8dm2w" in "kube-system" namespace has status "Ready":"False"
	I0307 18:45:15.288763  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:15.506923  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:15.510004  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:15.605260  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:15.788111  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:16.005669  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:16.010187  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:16.105690  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:16.295791  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:16.505573  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:16.509973  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:16.604922  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:16.789029  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:17.007221  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:17.009577  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:17.105314  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:17.288724  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:17.506511  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:17.510346  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:17.604959  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:17.692309  564395 pod_ready.go:92] pod "coredns-5dd5756b68-8dm2w" in "kube-system" namespace has status "Ready":"True"
	I0307 18:45:17.692336  564395 pod_ready.go:81] duration metric: took 13.50710641s for pod "coredns-5dd5756b68-8dm2w" in "kube-system" namespace to be "Ready" ...
	I0307 18:45:17.692348  564395 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g7dm8" in "kube-system" namespace to be "Ready" ...
	I0307 18:45:17.694543  564395 pod_ready.go:97] error getting pod "coredns-5dd5756b68-g7dm8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-g7dm8" not found
	I0307 18:45:17.694569  564395 pod_ready.go:81] duration metric: took 2.214686ms for pod "coredns-5dd5756b68-g7dm8" in "kube-system" namespace to be "Ready" ...
	E0307 18:45:17.694580  564395 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-g7dm8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-g7dm8" not found
	I0307 18:45:17.694587  564395 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-678595" in "kube-system" namespace to be "Ready" ...
	I0307 18:45:17.699703  564395 pod_ready.go:92] pod "etcd-addons-678595" in "kube-system" namespace has status "Ready":"True"
	I0307 18:45:17.699727  564395 pod_ready.go:81] duration metric: took 5.129108ms for pod "etcd-addons-678595" in "kube-system" namespace to be "Ready" ...
	I0307 18:45:17.699742  564395 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-678595" in "kube-system" namespace to be "Ready" ...
	I0307 18:45:17.705034  564395 pod_ready.go:92] pod "kube-apiserver-addons-678595" in "kube-system" namespace has status "Ready":"True"
	I0307 18:45:17.705059  564395 pod_ready.go:81] duration metric: took 5.307954ms for pod "kube-apiserver-addons-678595" in "kube-system" namespace to be "Ready" ...
	I0307 18:45:17.705071  564395 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-678595" in "kube-system" namespace to be "Ready" ...
	I0307 18:45:17.709997  564395 pod_ready.go:92] pod "kube-controller-manager-addons-678595" in "kube-system" namespace has status "Ready":"True"
	I0307 18:45:17.710021  564395 pod_ready.go:81] duration metric: took 4.94209ms for pod "kube-controller-manager-addons-678595" in "kube-system" namespace to be "Ready" ...
	I0307 18:45:17.710034  564395 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j6s7s" in "kube-system" namespace to be "Ready" ...
	I0307 18:45:17.788847  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:17.890311  564395 pod_ready.go:92] pod "kube-proxy-j6s7s" in "kube-system" namespace has status "Ready":"True"
	I0307 18:45:17.890339  564395 pod_ready.go:81] duration metric: took 180.296935ms for pod "kube-proxy-j6s7s" in "kube-system" namespace to be "Ready" ...
	I0307 18:45:17.890354  564395 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-678595" in "kube-system" namespace to be "Ready" ...
	I0307 18:45:18.006521  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:18.012295  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:18.105221  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:18.289955  564395 pod_ready.go:92] pod "kube-scheduler-addons-678595" in "kube-system" namespace has status "Ready":"True"
	I0307 18:45:18.289978  564395 pod_ready.go:81] duration metric: took 399.616476ms for pod "kube-scheduler-addons-678595" in "kube-system" namespace to be "Ready" ...
	I0307 18:45:18.289987  564395 pod_ready.go:38] duration metric: took 14.140047813s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
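
Of the 14.14s of extra waiting just accounted for, 13.5s went to coredns-5dd5756b68-8dm2w; the second replica, -g7dm8, reported "not found (skipping!)" because it had presumably been deleted by the rescale to 1 replica logged at 18:45:04.649216. A one-shot equivalent of this wait, for any of the listed label selectors:

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s
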
	I0307 18:45:18.290001  564395 api_server.go:52] waiting for apiserver process to appear ...
	I0307 18:45:18.290089  564395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:45:18.290723  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:18.304163  564395 api_server.go:72] duration metric: took 16.825209351s to wait for apiserver process to appear ...
	I0307 18:45:18.304232  564395 api_server.go:88] waiting for apiserver healthz status ...
	I0307 18:45:18.304259  564395 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0307 18:45:18.313083  564395 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0307 18:45:18.315399  564395 api_server.go:141] control plane version: v1.28.4
	I0307 18:45:18.315421  564395 api_server.go:131] duration metric: took 11.175517ms to wait for apiserver health ...
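
The healthz wait above is a plain HTTPS GET against the apiserver, retried until it returns 200; here it passed on the first probe. The manual equivalent (-k skips certificate verification for the illustration):

	curl -sk https://192.168.49.2:8443/healthz
	# ok
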
	I0307 18:45:18.315430  564395 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 18:45:18.496519  564395 system_pods.go:59] 18 kube-system pods found
	I0307 18:45:18.496600  564395 system_pods.go:61] "coredns-5dd5756b68-8dm2w" [7db6877e-bad1-4b89-b757-cfea9ded1200] Running
	I0307 18:45:18.496619  564395 system_pods.go:61] "csi-hostpath-attacher-0" [478850f4-4d80-4680-84b7-6682b5559254] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0307 18:45:18.496628  564395 system_pods.go:61] "csi-hostpath-resizer-0" [35bcb904-5042-41f0-9cd7-5e8db6249feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0307 18:45:18.496639  564395 system_pods.go:61] "csi-hostpathplugin-qjctj" [a630d410-e7ac-48bd-a80d-22401b824a23] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0307 18:45:18.496645  564395 system_pods.go:61] "etcd-addons-678595" [f8f5655d-c685-4980-b4fc-5f32f4817c8c] Running
	I0307 18:45:18.496650  564395 system_pods.go:61] "kindnet-rbl2v" [8904488f-51f1-44af-9746-ce5fe65d61d6] Running
	I0307 18:45:18.496670  564395 system_pods.go:61] "kube-apiserver-addons-678595" [a76bdbf7-e2a4-48cf-9d41-509a7b55473c] Running
	I0307 18:45:18.496682  564395 system_pods.go:61] "kube-controller-manager-addons-678595" [5648066f-8662-4672-bb50-a82cd993f8fb] Running
	I0307 18:45:18.496691  564395 system_pods.go:61] "kube-ingress-dns-minikube" [a7676989-e5d8-451d-ae61-e0ca11bb3547] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0307 18:45:18.496709  564395 system_pods.go:61] "kube-proxy-j6s7s" [c12b08f0-817d-4014-b773-d0ed05d405c5] Running
	I0307 18:45:18.496724  564395 system_pods.go:61] "kube-scheduler-addons-678595" [df0c1de0-7787-42d4-8434-8b5ddc9d67af] Running
	I0307 18:45:18.496730  564395 system_pods.go:61] "metrics-server-69cf46c98-hs8l8" [06c1f01e-1aed-470e-a746-3a378cb00b9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0307 18:45:18.496737  564395 system_pods.go:61] "nvidia-device-plugin-daemonset-pm597" [cd04a706-2c4c-4b67-b86d-138b482338ac] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0307 18:45:18.496750  564395 system_pods.go:61] "registry-dv5nj" [d4a278a3-fe04-4ecf-959a-50457864474e] Running
	I0307 18:45:18.496756  564395 system_pods.go:61] "registry-proxy-crp5d" [c9166acd-180e-4f89-aa07-bdab8898c012] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0307 18:45:18.496763  564395 system_pods.go:61] "snapshot-controller-58dbcc7b99-bhnxl" [dd618694-56fc-4f7a-b16b-85effb03c74d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 18:45:18.496783  564395 system_pods.go:61] "snapshot-controller-58dbcc7b99-whmwf" [326e768d-dfbe-446f-8722-931adb2471ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 18:45:18.496794  564395 system_pods.go:61] "storage-provisioner" [4c2dedee-9a07-4117-8dba-0311175adfde] Running
	I0307 18:45:18.496804  564395 system_pods.go:74] duration metric: took 181.367401ms to wait for pod list to return data ...
	I0307 18:45:18.496818  564395 default_sa.go:34] waiting for default service account to be created ...
	I0307 18:45:18.506754  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:18.511459  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:18.605871  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:18.689721  564395 default_sa.go:45] found service account: "default"
	I0307 18:45:18.689749  564395 default_sa.go:55] duration metric: took 192.922714ms for default service account to be created ...
	I0307 18:45:18.689759  564395 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 18:45:18.788760  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:18.896691  564395 system_pods.go:86] 18 kube-system pods found
	I0307 18:45:18.896727  564395 system_pods.go:89] "coredns-5dd5756b68-8dm2w" [7db6877e-bad1-4b89-b757-cfea9ded1200] Running
	I0307 18:45:18.896738  564395 system_pods.go:89] "csi-hostpath-attacher-0" [478850f4-4d80-4680-84b7-6682b5559254] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0307 18:45:18.896745  564395 system_pods.go:89] "csi-hostpath-resizer-0" [35bcb904-5042-41f0-9cd7-5e8db6249feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0307 18:45:18.896755  564395 system_pods.go:89] "csi-hostpathplugin-qjctj" [a630d410-e7ac-48bd-a80d-22401b824a23] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0307 18:45:18.896761  564395 system_pods.go:89] "etcd-addons-678595" [f8f5655d-c685-4980-b4fc-5f32f4817c8c] Running
	I0307 18:45:18.896766  564395 system_pods.go:89] "kindnet-rbl2v" [8904488f-51f1-44af-9746-ce5fe65d61d6] Running
	I0307 18:45:18.896771  564395 system_pods.go:89] "kube-apiserver-addons-678595" [a76bdbf7-e2a4-48cf-9d41-509a7b55473c] Running
	I0307 18:45:18.896782  564395 system_pods.go:89] "kube-controller-manager-addons-678595" [5648066f-8662-4672-bb50-a82cd993f8fb] Running
	I0307 18:45:18.896791  564395 system_pods.go:89] "kube-ingress-dns-minikube" [a7676989-e5d8-451d-ae61-e0ca11bb3547] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0307 18:45:18.896799  564395 system_pods.go:89] "kube-proxy-j6s7s" [c12b08f0-817d-4014-b773-d0ed05d405c5] Running
	I0307 18:45:18.896804  564395 system_pods.go:89] "kube-scheduler-addons-678595" [df0c1de0-7787-42d4-8434-8b5ddc9d67af] Running
	I0307 18:45:18.896809  564395 system_pods.go:89] "metrics-server-69cf46c98-hs8l8" [06c1f01e-1aed-470e-a746-3a378cb00b9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0307 18:45:18.896825  564395 system_pods.go:89] "nvidia-device-plugin-daemonset-pm597" [cd04a706-2c4c-4b67-b86d-138b482338ac] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0307 18:45:18.896829  564395 system_pods.go:89] "registry-dv5nj" [d4a278a3-fe04-4ecf-959a-50457864474e] Running
	I0307 18:45:18.896838  564395 system_pods.go:89] "registry-proxy-crp5d" [c9166acd-180e-4f89-aa07-bdab8898c012] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0307 18:45:18.896848  564395 system_pods.go:89] "snapshot-controller-58dbcc7b99-bhnxl" [dd618694-56fc-4f7a-b16b-85effb03c74d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 18:45:18.896855  564395 system_pods.go:89] "snapshot-controller-58dbcc7b99-whmwf" [326e768d-dfbe-446f-8722-931adb2471ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 18:45:18.896859  564395 system_pods.go:89] "storage-provisioner" [4c2dedee-9a07-4117-8dba-0311175adfde] Running
	I0307 18:45:18.896867  564395 system_pods.go:126] duration metric: took 207.10258ms to wait for k8s-apps to be running ...
	I0307 18:45:18.896882  564395 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 18:45:18.896936  564395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:45:18.910121  564395 system_svc.go:56] duration metric: took 13.227653ms WaitForService to wait for kubelet
	I0307 18:45:18.910170  564395 kubeadm.go:576] duration metric: took 17.431220376s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 18:45:18.910192  564395 node_conditions.go:102] verifying NodePressure condition ...
	I0307 18:45:19.027364  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:19.033731  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:19.101540  564395 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0307 18:45:19.101621  564395 node_conditions.go:123] node cpu capacity is 2
	I0307 18:45:19.101654  564395 node_conditions.go:105] duration metric: took 191.455099ms to run NodePressure ...
	I0307 18:45:19.101681  564395 start.go:240] waiting for startup goroutines ...
	I0307 18:45:19.108761  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:19.288618  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:19.505453  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:19.510212  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:19.605295  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:19.789391  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:20.028591  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:20.033960  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:20.106656  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:20.288589  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:20.507332  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:20.512408  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:20.610220  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:20.790646  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:21.013022  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:21.013974  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:21.107147  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:21.288512  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:21.513007  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:21.513988  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:21.605098  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:21.789057  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:22.019808  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:22.021327  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:22.105772  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:22.288736  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:22.506922  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:22.510072  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:22.604864  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:22.788852  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:23.007924  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:23.011016  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:23.104840  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:23.288534  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:23.505437  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:23.510585  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:23.605407  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:23.789011  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:24.009584  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:24.010445  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:45:24.105271  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:24.288790  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:24.506578  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:24.511949  564395 kapi.go:107] duration metric: took 14.507705571s to wait for kubernetes.io/minikube-addons=registry ...
	I0307 18:45:24.606578  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:24.788328  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:25.007452  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:25.106813  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:25.290307  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:25.506703  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:25.606638  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:25.789705  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:26.011333  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:26.108589  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:26.288449  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:26.506383  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:26.604820  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:26.788734  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:27.005982  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:27.106494  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:27.288108  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:27.506121  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:27.608970  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:27.788641  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:28.006112  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:28.106898  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:28.288503  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:28.505133  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:28.604619  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:28.788501  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:29.009582  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:29.107117  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:29.288977  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:29.506022  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:29.605011  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:29.788912  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:30.031053  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:30.106976  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:30.289233  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:30.506709  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:30.606026  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:30.788476  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:31.006422  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:31.105621  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:31.288396  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:31.506750  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:31.608859  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:31.790673  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:32.006691  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:32.105870  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:32.288567  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:32.505729  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:32.605538  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:32.791447  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:33.008750  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:33.113340  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:33.289834  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:33.506885  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:33.605417  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:33.789153  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:34.011106  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:34.104789  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:34.288338  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:34.505998  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:34.604256  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:34.789230  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:35.015436  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:35.105202  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:35.288584  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:35.507571  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:35.608946  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:35.788837  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:36.012636  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:36.107225  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:36.288961  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:36.505659  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:36.605078  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:36.788490  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:37.008889  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:37.106177  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:37.289023  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:37.507984  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:37.604974  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:37.788909  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:38.007986  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:38.105745  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:38.288535  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:38.513001  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:38.605438  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:38.789421  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:39.007141  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:39.106162  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:39.288969  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:39.509303  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:39.604787  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:39.788945  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:40.018435  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:40.105614  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:40.288441  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:40.519579  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:40.606416  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:40.789035  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:41.006122  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:41.105415  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:41.290024  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:41.506845  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:41.606592  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:41.788642  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:42.008109  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:42.107020  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:42.290463  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:42.505901  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:42.606310  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:42.793875  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:43.005658  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:43.105035  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:43.289685  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:43.504980  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:43.604528  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:43.789068  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:44.006543  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:44.105892  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:44.289331  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:44.506500  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:44.607206  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:44.789216  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:45.007670  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:45.114545  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:45.289612  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:45.505588  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:45.605683  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:45.788665  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:46.006098  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:46.105179  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:46.288781  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:46.505669  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:46.605359  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:46.788965  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:47.007248  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:47.105221  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:47.288962  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:47.505486  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:47.608285  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:47.793462  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:48.006975  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:48.105704  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:48.288205  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:48.507491  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:48.606095  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:48.789240  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:49.006051  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:49.105069  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:49.292975  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:49.505656  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:49.609978  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:49.789477  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:50.007933  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:50.106523  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:50.288638  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:50.505634  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:50.606183  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:50.788986  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:51.006861  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:51.105956  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:51.289588  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:51.506555  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:51.604860  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:51.790704  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:52.006991  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:52.104801  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:52.288629  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:52.505551  564395 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:45:52.607866  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:52.789548  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:53.007082  564395 kapi.go:107] duration metric: took 43.006916558s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0307 18:45:53.104809  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:53.288941  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:53.605364  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:53.789411  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:54.105246  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:54.289773  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:54.604694  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:54.788421  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:55.105773  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:55.289234  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:45:55.605040  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:55.788704  564395 kapi.go:107] duration metric: took 43.004049861s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0307 18:45:55.791225  564395 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-678595 cluster.
	I0307 18:45:55.793419  564395 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0307 18:45:55.795833  564395 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
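As a minimal sketch of the opt-out described in the message above: the gcp-auth webhook skips any pod carrying the gcp-auth-skip-secret label, so a pod created like this should not get the credentials mounted. The pod name and image are placeholders, and the label value "true" is conventional here, assuming only the key's presence is significant:

	kubectl apply -f - <<EOF
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds              # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"  # opt this pod out of credential mounting
	spec:
	  containers:
	  - name: app
	    image: nginx
	EOF
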
	I0307 18:45:56.105289  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:56.604854  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:57.105570  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:57.605589  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:58.105161  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:58.617407  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:59.105956  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:45:59.605877  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:46:00.112482  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:46:00.607134  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:46:01.105323  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:46:01.605400  564395 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:46:02.108186  564395 kapi.go:107] duration metric: took 50.513436258s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0307 18:46:02.110676  564395 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0307 18:46:02.112715  564395 addons.go:505] duration metric: took 1m0.6335082s for enable addons: enabled=[ingress-dns cloud-spanner storage-provisioner nvidia-device-plugin metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0307 18:46:02.112772  564395 start.go:245] waiting for cluster config update ...
	I0307 18:46:02.112793  564395 start.go:254] writing updated cluster config ...
	I0307 18:46:02.113067  564395 ssh_runner.go:195] Run: rm -f paused
	I0307 18:46:02.559345  564395 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0307 18:46:02.568931  564395 out.go:177] * Done! kubectl is now configured to use "addons-678595" cluster and "default" namespace by default
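Two generic checks (not part of this test run) confirm the context switch described in the message above; the skew noted between kubectl 1.29.2 and the 1.28.4 control plane is within kubectl's supported one-minor-version range:

	kubectl config current-context   # expect: addons-678595
	kubectl get nodes                # should list the addons-678595 control-plane node
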
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                       ATTEMPT             POD ID              POD
	2f8386bec5b32       dd1b12fcb6097       9 seconds ago        Exited              hello-world-app            2                   27060fbd70eca       hello-world-app-5d77478584-g7nt7
	dc10032be3d9f       be5e6f23a9904       33 seconds ago       Running             nginx                      0                   2e792a9026343       nginx
	2f27151eaee42       760b7cbba31e1       35 seconds ago       Exited              task-pv-container          0                   4b3962e854eab       task-pv-pod
	d2cbf035090a1       bafe72500920c       About a minute ago   Running             gcp-auth                   0                   4906138f42ad2       gcp-auth-5f6b4f85fd-f6g8s
	714426b66dac8       c0cfb4ce73bda       About a minute ago   Running             nvidia-device-plugin-ctr   0                   8d547472e3dc5       nvidia-device-plugin-daemonset-pm597
	a26a1b58db38c       41340d5d57adb       About a minute ago   Running             cloud-spanner-emulator     0                   1ed1edcbdcef6       cloud-spanner-emulator-6548d5df46-k64xn
	642242f0f4d18       1a024e390dd05       About a minute ago   Exited              patch                      0                   d461deab66103       ingress-nginx-admission-patch-9km6j
	c1506a6d81565       1a024e390dd05       About a minute ago   Exited              create                     0                   1319684a35775       ingress-nginx-admission-create-kz749
	6b320f85b49bb       20e3f2db01e81       About a minute ago   Running             yakd                       0                   ab28a4f96f30c       yakd-dashboard-9947fc6bf-25ljl
	29d71e9a37cd7       7ce2150c8929b       About a minute ago   Running             local-path-provisioner     0                   96130dac57973       local-path-provisioner-78b46b4d5c-hw9mr
	076a8a1106b01       97e04611ad434       About a minute ago   Running             coredns                    0                   48408ef4ae48e       coredns-5dd5756b68-8dm2w
	9d3707827ce7f       ba04bb24b9575       2 minutes ago        Running             storage-provisioner        0                   18b365195b8d1       storage-provisioner
	710e665fc1adc       4740c1948d3fc       2 minutes ago        Running             kindnet-cni                0                   5b99514051210       kindnet-rbl2v
	f945adadb9ed8       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                 0                   da804c472a91e       kube-proxy-j6s7s
	51b18bced0e30       9cdd6470f48c8       2 minutes ago        Running             etcd                       0                   e5a4a872b8808       etcd-addons-678595
	9b2440ccbf853       9961cbceaf234       2 minutes ago        Running             kube-controller-manager    0                   eff03127bdf99       kube-controller-manager-addons-678595
	fe5483cb8238c       05c284c929889       2 minutes ago        Running             kube-scheduler             0                   aef3694d08832       kube-scheduler-addons-678595
	702bf365fe76b       04b4c447bb9d4       2 minutes ago        Running             kube-apiserver             0                   8fdcc96f23284       kube-apiserver-addons-678595
	
	
	==> containerd <==
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.156046008Z" level=warning msg="cleanup warnings time=\"2024-03-07T18:47:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9156 runtime=io.containerd.runc.v2\n"
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.163248085Z" level=info msg="StopContainer for \"84a5f3b76a3f51c16f639da417f10c3dc64ede7babe75ec0af518e003da6cfcf\" returns successfully"
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.164044657Z" level=info msg="StopPodSandbox for \"d7fbe6035d10d2416bd162b7d84ad0a8321e964e0ccb50819ef6de925f7270ab\""
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.164112825Z" level=info msg="Container to stop \"84a5f3b76a3f51c16f639da417f10c3dc64ede7babe75ec0af518e003da6cfcf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.166057023Z" level=info msg="StopContainer for \"e4e3d10a004b9d9c7bf077b7a496bd5c0bb54220183a40c05e08df71a51d4b76\" returns successfully"
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.166737182Z" level=info msg="StopPodSandbox for \"6b17452dd58b608a99a03d8df33fab2c1cfac34c7df62ce839ef5de35657ab92\""
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.166794953Z" level=info msg="Container to stop \"e4e3d10a004b9d9c7bf077b7a496bd5c0bb54220183a40c05e08df71a51d4b76\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.221292708Z" level=info msg="shim disconnected" id=d7fbe6035d10d2416bd162b7d84ad0a8321e964e0ccb50819ef6de925f7270ab
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.221715400Z" level=warning msg="cleaning up after shim disconnected" id=d7fbe6035d10d2416bd162b7d84ad0a8321e964e0ccb50819ef6de925f7270ab namespace=k8s.io
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.221813098Z" level=info msg="cleaning up dead shim"
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.231393191Z" level=info msg="shim disconnected" id=6b17452dd58b608a99a03d8df33fab2c1cfac34c7df62ce839ef5de35657ab92
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.231963353Z" level=warning msg="cleaning up after shim disconnected" id=6b17452dd58b608a99a03d8df33fab2c1cfac34c7df62ce839ef5de35657ab92 namespace=k8s.io
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.232157887Z" level=info msg="cleaning up dead shim"
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.236104572Z" level=warning msg="cleanup warnings time=\"2024-03-07T18:47:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9226 runtime=io.containerd.runc.v2\n"
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.245316684Z" level=warning msg="cleanup warnings time=\"2024-03-07T18:47:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9241 runtime=io.containerd.runc.v2\n"
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.282401767Z" level=info msg="TearDown network for sandbox \"d7fbe6035d10d2416bd162b7d84ad0a8321e964e0ccb50819ef6de925f7270ab\" successfully"
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.282456216Z" level=info msg="StopPodSandbox for \"d7fbe6035d10d2416bd162b7d84ad0a8321e964e0ccb50819ef6de925f7270ab\" returns successfully"
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.298704720Z" level=info msg="TearDown network for sandbox \"6b17452dd58b608a99a03d8df33fab2c1cfac34c7df62ce839ef5de35657ab92\" successfully"
	Mar 07 18:47:12 addons-678595 containerd[762]: time="2024-03-07T18:47:12.298754451Z" level=info msg="StopPodSandbox for \"6b17452dd58b608a99a03d8df33fab2c1cfac34c7df62ce839ef5de35657ab92\" returns successfully"
	Mar 07 18:47:13 addons-678595 containerd[762]: time="2024-03-07T18:47:13.103201973Z" level=info msg="RemoveContainer for \"e4e3d10a004b9d9c7bf077b7a496bd5c0bb54220183a40c05e08df71a51d4b76\""
	Mar 07 18:47:13 addons-678595 containerd[762]: time="2024-03-07T18:47:13.111803112Z" level=info msg="RemoveContainer for \"e4e3d10a004b9d9c7bf077b7a496bd5c0bb54220183a40c05e08df71a51d4b76\" returns successfully"
	Mar 07 18:47:13 addons-678595 containerd[762]: time="2024-03-07T18:47:13.114907203Z" level=error msg="ContainerStatus for \"e4e3d10a004b9d9c7bf077b7a496bd5c0bb54220183a40c05e08df71a51d4b76\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4e3d10a004b9d9c7bf077b7a496bd5c0bb54220183a40c05e08df71a51d4b76\": not found"
	Mar 07 18:47:13 addons-678595 containerd[762]: time="2024-03-07T18:47:13.116702700Z" level=info msg="RemoveContainer for \"84a5f3b76a3f51c16f639da417f10c3dc64ede7babe75ec0af518e003da6cfcf\""
	Mar 07 18:47:13 addons-678595 containerd[762]: time="2024-03-07T18:47:13.124467571Z" level=info msg="RemoveContainer for \"84a5f3b76a3f51c16f639da417f10c3dc64ede7babe75ec0af518e003da6cfcf\" returns successfully"
	Mar 07 18:47:13 addons-678595 containerd[762]: time="2024-03-07T18:47:13.125074491Z" level=error msg="ContainerStatus for \"84a5f3b76a3f51c16f639da417f10c3dc64ede7babe75ec0af518e003da6cfcf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84a5f3b76a3f51c16f639da417f10c3dc64ede7babe75ec0af518e003da6cfcf\": not found"
	
	
	==> coredns [076a8a1106b01b7bf4eac181094b45d3228db2dfadcc3db6703935133a79e386] <==
	[INFO] 10.244.0.19:41175 - 39325 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000071096s
	[INFO] 10.244.0.19:41175 - 54756 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000095392s
	[INFO] 10.244.0.19:55728 - 43848 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002004111s
	[INFO] 10.244.0.19:55728 - 18368 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000128516s
	[INFO] 10.244.0.19:41175 - 23195 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001367637s
	[INFO] 10.244.0.19:41175 - 46378 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004472949s
	[INFO] 10.244.0.19:41175 - 63185 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000103146s
	[INFO] 10.244.0.19:56551 - 29617 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000119573s
	[INFO] 10.244.0.19:42212 - 27039 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000098239s
	[INFO] 10.244.0.19:56551 - 58720 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061612s
	[INFO] 10.244.0.19:56551 - 1831 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000067995s
	[INFO] 10.244.0.19:42212 - 37957 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00005938s
	[INFO] 10.244.0.19:42212 - 32594 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049427s
	[INFO] 10.244.0.19:56551 - 52791 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000066854s
	[INFO] 10.244.0.19:56551 - 51022 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005723s
	[INFO] 10.244.0.19:42212 - 47357 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00012237s
	[INFO] 10.244.0.19:42212 - 25743 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056156s
	[INFO] 10.244.0.19:56551 - 9320 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083511s
	[INFO] 10.244.0.19:42212 - 61709 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050962s
	[INFO] 10.244.0.19:56551 - 21252 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001501461s
	[INFO] 10.244.0.19:42212 - 10554 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001066601s
	[INFO] 10.244.0.19:56551 - 44092 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001212249s
	[INFO] 10.244.0.19:56551 - 1770 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000091249s
	[INFO] 10.244.0.19:42212 - 57628 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00118408s
	[INFO] 10.244.0.19:42212 - 21936 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062547s
	
	
	==> describe nodes <==
	Name:               addons-678595
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-678595
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=526fad16cb967ea3a5b243df32efb88cb58b81ec
	                    minikube.k8s.io/name=addons-678595
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T18_44_49_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-678595
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 18:44:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-678595
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 18:47:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 18:46:50 +0000   Thu, 07 Mar 2024 18:44:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 18:46:50 +0000   Thu, 07 Mar 2024 18:44:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 18:46:50 +0000   Thu, 07 Mar 2024 18:44:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 18:46:50 +0000   Thu, 07 Mar 2024 18:44:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-678595
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d05314f01ef4e4d9024575a3ac70a9e
	  System UUID:                a2e20f66-55f7-413e-9033-f1c2d7cc6343
	  Boot ID:                    a949ea88-4a69-4ab0-89c5-986450203265
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-k64xn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  default                     hello-world-app-5d77478584-g7nt7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  gcp-auth                    gcp-auth-5f6b4f85fd-f6g8s                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 coredns-5dd5756b68-8dm2w                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m14s
	  kube-system                 etcd-addons-678595                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-rbl2v                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m14s
	  kube-system                 kube-apiserver-addons-678595               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-addons-678595      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-j6s7s                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-addons-678595               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 nvidia-device-plugin-daemonset-pm597       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  local-path-storage          local-path-provisioner-78b46b4d5c-hw9mr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-25ljl             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node addons-678595 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node addons-678595 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m35s (x7 over 2m35s)  kubelet          Node addons-678595 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m27s                  kubelet          Node addons-678595 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s                  kubelet          Node addons-678595 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s                  kubelet          Node addons-678595 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m27s                  kubelet          Node addons-678595 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m27s                  kubelet          Node addons-678595 status is now: NodeReady
	  Normal  RegisteredNode           2m14s                  node-controller  Node addons-678595 event: Registered Node addons-678595 in Controller
	
	
	==> dmesg <==
	[  +0.000767] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001007] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=000000003a76bb30
	[  +0.001126] FS-Cache: N-key=[8] 'df3a5c0100000000'
	[  +0.006419] FS-Cache: Duplicate cookie detected
	[  +0.000764] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001012] FS-Cache: O-cookie d=0000000083f1d117{9p.inode} n=00000000fd4f0320
	[  +0.001144] FS-Cache: O-key=[8] 'df3a5c0100000000'
	[  +0.000799] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001002] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=00000000776f6318
	[  +0.001070] FS-Cache: N-key=[8] 'df3a5c0100000000'
	[  +1.794085] FS-Cache: Duplicate cookie detected
	[  +0.000796] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001000] FS-Cache: O-cookie d=0000000083f1d117{9p.inode} n=000000002bfb4aa6
	[  +0.001203] FS-Cache: O-key=[8] 'de3a5c0100000000'
	[  +0.000769] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001023] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=00000000374e2f34
	[  +0.001183] FS-Cache: N-key=[8] 'de3a5c0100000000'
	[  +0.333056] FS-Cache: Duplicate cookie detected
	[  +0.000726] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001001] FS-Cache: O-cookie d=0000000083f1d117{9p.inode} n=000000007434b39f
	[  +0.001097] FS-Cache: O-key=[8] 'e43a5c0100000000'
	[  +0.000794] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=000000003a76bb30
	[  +0.001095] FS-Cache: N-key=[8] 'e43a5c0100000000'
	[Mar 7 18:16] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/27/fs': -2
	
	
	==> etcd [51b18bced0e309d970db18c42d7b27e3d57b1ec3fd627a3a5ccad18865498117] <==
	{"level":"info","ts":"2024-03-07T18:44:41.854597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-07T18:44:41.865927Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-07T18:44:41.866968Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-07T18:44:41.873233Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-07T18:44:41.874147Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-07T18:44:41.873895Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-07T18:44:41.873924Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-07T18:44:42.30557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-07T18:44:42.305797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-07T18:44:42.305942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-07T18:44:42.306035Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-07T18:44:42.306112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-07T18:44:42.306202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-07T18:44:42.306285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-07T18:44:42.30911Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T18:44:42.310596Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-03-07T18:44:42.31084Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T18:44:42.309081Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-678595 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-07T18:44:42.311273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T18:44:42.311981Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T18:44:42.313177Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T18:44:42.313325Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T18:44:42.314635Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-07T18:44:42.317772Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-07T18:44:42.322281Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [d2cbf035090a107843d3b74b92daae7d3f369e8dfef0e72bfd207d4b6ba2bce8] <==
	2024/03/07 18:45:54 GCP Auth Webhook started!
	2024/03/07 18:46:14 Ready to marshal response ...
	2024/03/07 18:46:14 Ready to write response ...
	2024/03/07 18:46:36 Ready to marshal response ...
	2024/03/07 18:46:36 Ready to write response ...
	2024/03/07 18:46:37 Ready to marshal response ...
	2024/03/07 18:46:37 Ready to write response ...
	2024/03/07 18:46:48 Ready to marshal response ...
	2024/03/07 18:46:48 Ready to write response ...
	2024/03/07 18:46:55 Ready to marshal response ...
	2024/03/07 18:46:55 Ready to write response ...
	
	
	==> kernel <==
	 18:47:15 up  2:29,  0 users,  load average: 1.51, 2.19, 2.46
	Linux addons-678595 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [710e665fc1adc9d65d74bd57a3089ea090924c782fcb04c30c3cfe36eff0dd81] <==
	I0307 18:45:06.540833       1 main.go:227] handling current node
	I0307 18:45:16.629467       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 18:45:16.629492       1 main.go:227] handling current node
	I0307 18:45:26.641780       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 18:45:26.641872       1 main.go:227] handling current node
	I0307 18:45:36.658326       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 18:45:36.658509       1 main.go:227] handling current node
	I0307 18:45:46.669372       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 18:45:46.669403       1 main.go:227] handling current node
	I0307 18:45:56.681371       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 18:45:56.681400       1 main.go:227] handling current node
	I0307 18:46:06.685969       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 18:46:06.685999       1 main.go:227] handling current node
	I0307 18:46:16.696599       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 18:46:16.696626       1 main.go:227] handling current node
	I0307 18:46:26.700231       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 18:46:26.700257       1 main.go:227] handling current node
	I0307 18:46:36.710472       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 18:46:36.710502       1 main.go:227] handling current node
	I0307 18:46:46.714776       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 18:46:46.714806       1 main.go:227] handling current node
	I0307 18:46:56.725588       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 18:46:56.725617       1 main.go:227] handling current node
	I0307 18:47:06.924210       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 18:47:06.924242       1 main.go:227] handling current node
	
	
	==> kube-apiserver [702bf365fe76b4d71d3f4357472849ada7a91e4df092169237978456d2ea487d] <==
	I0307 18:46:31.286002       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0307 18:46:32.305031       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0307 18:46:36.987295       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0307 18:46:37.351688       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.63.27"}
	I0307 18:46:47.434099       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0307 18:46:49.207172       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.235.112"}
	I0307 18:47:11.817778       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 18:47:11.817830       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 18:47:11.830375       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 18:47:11.830436       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 18:47:11.841135       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 18:47:11.841206       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 18:47:11.865118       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 18:47:11.865158       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 18:47:11.868760       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 18:47:11.868802       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 18:47:11.883397       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 18:47:11.883458       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 18:47:11.906759       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 18:47:11.906806       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 18:47:11.907382       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 18:47:11.907420       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0307 18:47:12.868903       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0307 18:47:12.908151       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0307 18:47:12.932935       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [9b2440ccbf853fac79401abad0dd05222c733d76ae060a9854d36b6595e28b15] <==
	I0307 18:46:52.924178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.479µs"
	I0307 18:46:53.928645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="94.809µs"
	I0307 18:46:54.409114       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0307 18:47:05.196859       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0307 18:47:05.316970       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	I0307 18:47:06.766812       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0307 18:47:06.772399       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="19.192µs"
	I0307 18:47:06.774960       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0307 18:47:07.082152       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="74.641µs"
	I0307 18:47:11.958087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="6.99µs"
	E0307 18:47:12.871126       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 18:47:12.910171       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 18:47:12.935048       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 18:47:13.751699       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 18:47:13.751733       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 18:47:13.875016       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 18:47:13.875052       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 18:47:13.947632       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 18:47:13.947674       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 18:47:15.631110       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 18:47:15.631160       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 18:47:15.701088       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 18:47:15.701122       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 18:47:15.701939       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 18:47:15.701967       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [f945adadb9ed8d0272ddbb09417ed510137e29ae2b6dfb687819087fae801986] <==
	I0307 18:45:02.962618       1 server_others.go:69] "Using iptables proxy"
	I0307 18:45:02.981511       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0307 18:45:03.025120       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0307 18:45:03.033932       1 server_others.go:152] "Using iptables Proxier"
	I0307 18:45:03.033980       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0307 18:45:03.033990       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0307 18:45:03.034017       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 18:45:03.034249       1 server.go:846] "Version info" version="v1.28.4"
	I0307 18:45:03.034272       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 18:45:03.035253       1 config.go:188] "Starting service config controller"
	I0307 18:45:03.035286       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0307 18:45:03.035312       1 config.go:97] "Starting endpoint slice config controller"
	I0307 18:45:03.035320       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0307 18:45:03.037096       1 config.go:315] "Starting node config controller"
	I0307 18:45:03.037129       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0307 18:45:03.137105       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0307 18:45:03.137155       1 shared_informer.go:318] Caches are synced for node config
	I0307 18:45:03.137165       1 shared_informer.go:318] Caches are synced for service config
	
	
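The kube-proxy startup above follows the standard client-go informer pattern: start the shared informers, then block until the local caches sync before serving. A minimal sketch of that pattern, assuming in-cluster credentials and using a Service informer as a stand-in for kube-proxy's internal config watchers (illustrative, not kube-proxy's actual code):

package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs inside the cluster
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Shared informer factory with a 30s resync, watching Services.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()

	ctx := context.Background()
	factory.Start(ctx.Done())

	// "Waiting for caches to sync" / "Caches are synced" in the log above.
	if !cache.WaitForCacheSync(ctx.Done(), svcInformer.HasSynced) {
		log.Fatal("failed to sync service cache")
	}
	log.Println("caches are synced for service config")
}
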
	==> kube-scheduler [fe5483cb8238c0d88456c8a52567ada4ca6d297cc51ca2db46c6fab8c6a8f1fe] <==
	W0307 18:44:44.894315       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0307 18:44:44.894562       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0307 18:44:44.894378       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0307 18:44:44.894771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0307 18:44:44.895005       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0307 18:44:44.895147       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0307 18:44:44.895365       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 18:44:44.895475       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0307 18:44:44.895670       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 18:44:44.895825       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0307 18:44:45.775038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 18:44:45.775076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0307 18:44:45.782592       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0307 18:44:45.782782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0307 18:44:45.789503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 18:44:45.789760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0307 18:44:45.797778       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 18:44:45.797978       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0307 18:44:45.838747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0307 18:44:45.838957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0307 18:44:45.888725       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0307 18:44:45.888765       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 18:44:45.921428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0307 18:44:45.921594       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0307 18:44:47.871936       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 07 18:47:10 addons-678595 kubelet[1497]: I0307 18:47:10.038155    1497 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cde17c6-4159-4e3f-9af7-9815c24d1a30-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4cde17c6-4159-4e3f-9af7-9815c24d1a30" (UID: "4cde17c6-4159-4e3f-9af7-9815c24d1a30"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 07 18:47:10 addons-678595 kubelet[1497]: I0307 18:47:10.039042    1497 scope.go:117] "RemoveContainer" containerID="108b8da08a24d691db696abfe1348b8efced90c01ad4ebdb43684f7cf40941a3"
	Mar 07 18:47:10 addons-678595 kubelet[1497]: I0307 18:47:10.041466    1497 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cde17c6-4159-4e3f-9af7-9815c24d1a30-kube-api-access-jmj8l" (OuterVolumeSpecName: "kube-api-access-jmj8l") pod "4cde17c6-4159-4e3f-9af7-9815c24d1a30" (UID: "4cde17c6-4159-4e3f-9af7-9815c24d1a30"). InnerVolumeSpecName "kube-api-access-jmj8l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 07 18:47:10 addons-678595 kubelet[1497]: I0307 18:47:10.047575    1497 scope.go:117] "RemoveContainer" containerID="108b8da08a24d691db696abfe1348b8efced90c01ad4ebdb43684f7cf40941a3"
	Mar 07 18:47:10 addons-678595 kubelet[1497]: E0307 18:47:10.048103    1497 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"108b8da08a24d691db696abfe1348b8efced90c01ad4ebdb43684f7cf40941a3\": not found" containerID="108b8da08a24d691db696abfe1348b8efced90c01ad4ebdb43684f7cf40941a3"
	Mar 07 18:47:10 addons-678595 kubelet[1497]: I0307 18:47:10.048151    1497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"108b8da08a24d691db696abfe1348b8efced90c01ad4ebdb43684f7cf40941a3"} err="failed to get container status \"108b8da08a24d691db696abfe1348b8efced90c01ad4ebdb43684f7cf40941a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"108b8da08a24d691db696abfe1348b8efced90c01ad4ebdb43684f7cf40941a3\": not found"
	Mar 07 18:47:10 addons-678595 kubelet[1497]: I0307 18:47:10.136513    1497 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cde17c6-4159-4e3f-9af7-9815c24d1a30-webhook-cert\") on node \"addons-678595\" DevicePath \"\""
	Mar 07 18:47:10 addons-678595 kubelet[1497]: I0307 18:47:10.136557    1497 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jmj8l\" (UniqueName: \"kubernetes.io/projected/4cde17c6-4159-4e3f-9af7-9815c24d1a30-kube-api-access-jmj8l\") on node \"addons-678595\" DevicePath \"\""
	Mar 07 18:47:12 addons-678595 kubelet[1497]: I0307 18:47:12.086714    1497 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4cde17c6-4159-4e3f-9af7-9815c24d1a30" path="/var/lib/kubelet/pods/4cde17c6-4159-4e3f-9af7-9815c24d1a30/volumes"
	Mar 07 18:47:12 addons-678595 kubelet[1497]: I0307 18:47:12.383424    1497 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n2zj\" (UniqueName: \"kubernetes.io/projected/326e768d-dfbe-446f-8722-931adb2471ab-kube-api-access-7n2zj\") pod \"326e768d-dfbe-446f-8722-931adb2471ab\" (UID: \"326e768d-dfbe-446f-8722-931adb2471ab\") "
	Mar 07 18:47:12 addons-678595 kubelet[1497]: I0307 18:47:12.383485    1497 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwz5v\" (UniqueName: \"kubernetes.io/projected/dd618694-56fc-4f7a-b16b-85effb03c74d-kube-api-access-mwz5v\") pod \"dd618694-56fc-4f7a-b16b-85effb03c74d\" (UID: \"dd618694-56fc-4f7a-b16b-85effb03c74d\") "
	Mar 07 18:47:12 addons-678595 kubelet[1497]: I0307 18:47:12.385469    1497 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd618694-56fc-4f7a-b16b-85effb03c74d-kube-api-access-mwz5v" (OuterVolumeSpecName: "kube-api-access-mwz5v") pod "dd618694-56fc-4f7a-b16b-85effb03c74d" (UID: "dd618694-56fc-4f7a-b16b-85effb03c74d"). InnerVolumeSpecName "kube-api-access-mwz5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 07 18:47:12 addons-678595 kubelet[1497]: I0307 18:47:12.385899    1497 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/326e768d-dfbe-446f-8722-931adb2471ab-kube-api-access-7n2zj" (OuterVolumeSpecName: "kube-api-access-7n2zj") pod "326e768d-dfbe-446f-8722-931adb2471ab" (UID: "326e768d-dfbe-446f-8722-931adb2471ab"). InnerVolumeSpecName "kube-api-access-7n2zj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 07 18:47:12 addons-678595 kubelet[1497]: I0307 18:47:12.484342    1497 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7n2zj\" (UniqueName: \"kubernetes.io/projected/326e768d-dfbe-446f-8722-931adb2471ab-kube-api-access-7n2zj\") on node \"addons-678595\" DevicePath \"\""
	Mar 07 18:47:12 addons-678595 kubelet[1497]: I0307 18:47:12.484389    1497 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mwz5v\" (UniqueName: \"kubernetes.io/projected/dd618694-56fc-4f7a-b16b-85effb03c74d-kube-api-access-mwz5v\") on node \"addons-678595\" DevicePath \"\""
	Mar 07 18:47:13 addons-678595 kubelet[1497]: I0307 18:47:13.098647    1497 scope.go:117] "RemoveContainer" containerID="e4e3d10a004b9d9c7bf077b7a496bd5c0bb54220183a40c05e08df71a51d4b76"
	Mar 07 18:47:13 addons-678595 kubelet[1497]: I0307 18:47:13.112893    1497 scope.go:117] "RemoveContainer" containerID="e4e3d10a004b9d9c7bf077b7a496bd5c0bb54220183a40c05e08df71a51d4b76"
	Mar 07 18:47:13 addons-678595 kubelet[1497]: E0307 18:47:13.115147    1497 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4e3d10a004b9d9c7bf077b7a496bd5c0bb54220183a40c05e08df71a51d4b76\": not found" containerID="e4e3d10a004b9d9c7bf077b7a496bd5c0bb54220183a40c05e08df71a51d4b76"
	Mar 07 18:47:13 addons-678595 kubelet[1497]: I0307 18:47:13.115197    1497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4e3d10a004b9d9c7bf077b7a496bd5c0bb54220183a40c05e08df71a51d4b76"} err="failed to get container status \"e4e3d10a004b9d9c7bf077b7a496bd5c0bb54220183a40c05e08df71a51d4b76\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4e3d10a004b9d9c7bf077b7a496bd5c0bb54220183a40c05e08df71a51d4b76\": not found"
	Mar 07 18:47:13 addons-678595 kubelet[1497]: I0307 18:47:13.115215    1497 scope.go:117] "RemoveContainer" containerID="84a5f3b76a3f51c16f639da417f10c3dc64ede7babe75ec0af518e003da6cfcf"
	Mar 07 18:47:13 addons-678595 kubelet[1497]: I0307 18:47:13.124786    1497 scope.go:117] "RemoveContainer" containerID="84a5f3b76a3f51c16f639da417f10c3dc64ede7babe75ec0af518e003da6cfcf"
	Mar 07 18:47:13 addons-678595 kubelet[1497]: E0307 18:47:13.126761    1497 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84a5f3b76a3f51c16f639da417f10c3dc64ede7babe75ec0af518e003da6cfcf\": not found" containerID="84a5f3b76a3f51c16f639da417f10c3dc64ede7babe75ec0af518e003da6cfcf"
	Mar 07 18:47:13 addons-678595 kubelet[1497]: I0307 18:47:13.126834    1497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"84a5f3b76a3f51c16f639da417f10c3dc64ede7babe75ec0af518e003da6cfcf"} err="failed to get container status \"84a5f3b76a3f51c16f639da417f10c3dc64ede7babe75ec0af518e003da6cfcf\": rpc error: code = NotFound desc = an error occurred when try to find container \"84a5f3b76a3f51c16f639da417f10c3dc64ede7babe75ec0af518e003da6cfcf\": not found"
	Mar 07 18:47:14 addons-678595 kubelet[1497]: I0307 18:47:14.018385    1497 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="326e768d-dfbe-446f-8722-931adb2471ab" path="/var/lib/kubelet/pods/326e768d-dfbe-446f-8722-931adb2471ab/volumes"
	Mar 07 18:47:14 addons-678595 kubelet[1497]: I0307 18:47:14.018881    1497 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="dd618694-56fc-4f7a-b16b-85effb03c74d" path="/var/lib/kubelet/pods/dd618694-56fc-4f7a-b16b-85effb03c74d/volumes"
	
	
	==> storage-provisioner [9d3707827ce7f79e70bc77bd14f65edef7a5898530d28d2af47fb6080dd1d800] <==
	I0307 18:45:09.102796       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 18:45:09.173631       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 18:45:09.173686       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 18:45:09.183667       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 18:45:09.192858       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-678595_3f86205c-64aa-490f-9b0a-09487d290f03!
	I0307 18:45:09.194259       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56d55c4e-f0fd-4f31-9499-407427a718a3", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-678595_3f86205c-64aa-490f-9b0a-09487d290f03 became leader
	I0307 18:45:09.293386       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-678595_3f86205c-64aa-490f-9b0a-09487d290f03!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-678595 -n addons-678595
helpers_test.go:261: (dbg) Run:  kubectl --context addons-678595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (39.91s)
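
Separately from the failure itself, the storage-provisioner entries at the end of the post-mortem show client-go leader election acquiring the kube-system/k8s.io-minikube-hostpath lease. A minimal sketch of that pattern follows; it is not the provisioner's actual code, and it uses the modern Leases lock where the event in the log shows an Endpoints-based lock:

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs inside the cluster
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Lease namespace/name taken from the log above; identity is illustrative.
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id},
	)
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("successfully acquired lease; starting controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; shutting down")
			},
		},
	})
}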

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image load --daemon gcr.io/google-containers/addon-resizer:functional-788559 --alsologtostderr
2024/03/07 18:52:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-788559 image load --daemon gcr.io/google-containers/addon-resizer:functional-788559 --alsologtostderr: (4.509954468s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-788559" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.80s)
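
The assertion at functional_test.go:442 boils down to listing the images inside the cluster and scanning for the expected tag. A minimal standalone sketch of that check, with the binary path and profile name taken from the run above (an illustration, not the harness code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the images known to the cluster's container runtime.
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-788559", "image", "ls").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	want := "gcr.io/google-containers/addon-resizer:functional-788559"
	if strings.Contains(string(out), want) {
		fmt.Println("image is loaded")
	} else {
		fmt.Printf("expected %q to be loaded but it is not there\n", want)
	}
}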

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image load --daemon gcr.io/google-containers/addon-resizer:functional-788559 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-788559 image load --daemon gcr.io/google-containers/addon-resizer:functional-788559 --alsologtostderr: (3.289984453s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-788559" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.615018713s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-788559
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image load --daemon gcr.io/google-containers/addon-resizer:functional-788559 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-788559 image load --daemon gcr.io/google-containers/addon-resizer:functional-788559 --alsologtostderr: (3.217550952s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-788559" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image save gcr.io/google-containers/addon-resizer:functional-788559 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)
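
The check behind functional_test.go:385 is a plain existence test on the host, and the missing tarball also explains the ImageLoadFromFile failure below, which stats the same path. A minimal sketch of the check, with the path copied from the run above (illustrative, not the harness code):

package main

import (
	"fmt"
	"os"
)

func main() {
	path := "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar"
	info, err := os.Stat(path)
	if err != nil {
		fmt.Println("expected tar to exist after `image save`, but:", err)
		return
	}
	fmt.Printf("%s exists (%d bytes)\n", path, info.Size())
}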

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0307 18:53:14.370853  596823 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:53:14.371446  596823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:53:14.371458  596823 out.go:304] Setting ErrFile to fd 2...
	I0307 18:53:14.371465  596823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:53:14.371715  596823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
	I0307 18:53:14.372334  596823 config.go:182] Loaded profile config "functional-788559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 18:53:14.372465  596823 config.go:182] Loaded profile config "functional-788559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 18:53:14.372952  596823 cli_runner.go:164] Run: docker container inspect functional-788559 --format={{.State.Status}}
	I0307 18:53:14.389711  596823 ssh_runner.go:195] Run: systemctl --version
	I0307 18:53:14.389791  596823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-788559
	I0307 18:53:14.405723  596823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/functional-788559/id_rsa Username:docker}
	I0307 18:53:14.493809  596823 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0307 18:53:14.493874  596823 cache_images.go:254] Failed to load cached images for profile functional-788559. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0307 18:53:14.493894  596823 cache_images.go:262] succeeded pushing to: 
	I0307 18:53:14.493899  596823 cache_images.go:263] failed pushing to: functional-788559

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (371.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-490121 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-490121 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m8.427415912s)

                                                
                                                
-- stdout --
	* [old-k8s-version-490121] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18239
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-490121" primary control-plane node in "old-k8s-version-490121" cluster
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Restarting existing docker container for "old-k8s-version-490121" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-490121 addons enable metrics-server
	
	* Enabled addons: default-storageclass, dashboard, storage-provisioner, metrics-server
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:30:26.308206  762831 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:30:26.308868  762831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:30:26.308883  762831 out.go:304] Setting ErrFile to fd 2...
	I0307 19:30:26.308889  762831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:30:26.309246  762831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
	I0307 19:30:26.309723  762831 out.go:298] Setting JSON to false
	I0307 19:30:26.310875  762831 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11570,"bootTime":1709828256,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 19:30:26.310949  762831 start.go:139] virtualization:  
	I0307 19:30:26.313810  762831 out.go:177] * [old-k8s-version-490121] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 19:30:26.316202  762831 out.go:177]   - MINIKUBE_LOCATION=18239
	I0307 19:30:26.318032  762831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:30:26.316244  762831 notify.go:220] Checking for updates...
	I0307 19:30:26.319861  762831 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	I0307 19:30:26.321669  762831 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	I0307 19:30:26.323603  762831 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 19:30:26.325594  762831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:30:26.328419  762831 config.go:182] Loaded profile config "old-k8s-version-490121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0307 19:30:26.331073  762831 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0307 19:30:26.332939  762831 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:30:26.354981  762831 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 19:30:26.355101  762831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 19:30:26.425683  762831 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 19:30:26.415773364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 19:30:26.425793  762831 docker.go:295] overlay module found
	I0307 19:30:26.428005  762831 out.go:177] * Using the docker driver based on existing profile
	I0307 19:30:26.430936  762831 start.go:297] selected driver: docker
	I0307 19:30:26.430956  762831 start.go:901] validating driver "docker" against &{Name:old-k8s-version-490121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:30:26.431065  762831 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:30:26.431748  762831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 19:30:26.493672  762831 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 19:30:26.484540586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 19:30:26.494024  762831 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:30:26.494087  762831 cni.go:84] Creating CNI manager for ""
	I0307 19:30:26.494104  762831 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 19:30:26.494163  762831 start.go:340] cluster config:
	{Name:old-k8s-version-490121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:30:26.496588  762831 out.go:177] * Starting "old-k8s-version-490121" primary control-plane node in "old-k8s-version-490121" cluster
	I0307 19:30:26.498341  762831 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 19:30:26.499986  762831 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0307 19:30:26.501685  762831 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0307 19:30:26.501744  762831 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0307 19:30:26.501758  762831 cache.go:56] Caching tarball of preloaded images
	I0307 19:30:26.501769  762831 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 19:30:26.501838  762831 preload.go:173] Found /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:30:26.501848  762831 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0307 19:30:26.501968  762831 profile.go:142] Saving config to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/config.json ...
	I0307 19:30:26.519503  762831 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0307 19:30:26.519527  762831 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0307 19:30:26.519565  762831 cache.go:194] Successfully downloaded all kic artifacts
	I0307 19:30:26.519594  762831 start.go:360] acquireMachinesLock for old-k8s-version-490121: {Name:mk7022b7081856386136ba228ad21410889c3d49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:30:26.519665  762831 start.go:364] duration metric: took 43.7µs to acquireMachinesLock for "old-k8s-version-490121"
	I0307 19:30:26.519685  762831 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:30:26.519690  762831 fix.go:54] fixHost starting: 
	I0307 19:30:26.519975  762831 cli_runner.go:164] Run: docker container inspect old-k8s-version-490121 --format={{.State.Status}}
	I0307 19:30:26.535978  762831 fix.go:112] recreateIfNeeded on old-k8s-version-490121: state=Stopped err=<nil>
	W0307 19:30:26.536016  762831 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:30:26.538448  762831 out.go:177] * Restarting existing docker container for "old-k8s-version-490121" ...
	I0307 19:30:26.540252  762831 cli_runner.go:164] Run: docker start old-k8s-version-490121
	I0307 19:30:26.841652  762831 cli_runner.go:164] Run: docker container inspect old-k8s-version-490121 --format={{.State.Status}}
	I0307 19:30:26.864693  762831 kic.go:430] container "old-k8s-version-490121" state is running.
	I0307 19:30:26.865103  762831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-490121
	I0307 19:30:26.891392  762831 profile.go:142] Saving config to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/config.json ...
	I0307 19:30:26.891640  762831 machine.go:94] provisionDockerMachine start ...
	I0307 19:30:26.891706  762831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-490121
	I0307 19:30:26.912591  762831 main.go:141] libmachine: Using SSH client type: native
	I0307 19:30:26.912863  762831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33808 <nil> <nil>}
	I0307 19:30:26.912873  762831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 19:30:26.913483  762831 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36758->127.0.0.1:33808: read: connection reset by peer
	I0307 19:30:30.050086  762831 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-490121
	
	I0307 19:30:30.050129  762831 ubuntu.go:169] provisioning hostname "old-k8s-version-490121"
	I0307 19:30:30.050205  762831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-490121
	I0307 19:30:30.072173  762831 main.go:141] libmachine: Using SSH client type: native
	I0307 19:30:30.072464  762831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33808 <nil> <nil>}
	I0307 19:30:30.072484  762831 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-490121 && echo "old-k8s-version-490121" | sudo tee /etc/hostname
	I0307 19:30:30.224347  762831 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-490121
	
	I0307 19:30:30.224434  762831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-490121
	I0307 19:30:30.243257  762831 main.go:141] libmachine: Using SSH client type: native
	I0307 19:30:30.243518  762831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33808 <nil> <nil>}
	I0307 19:30:30.243542  762831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-490121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-490121/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-490121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 19:30:30.380423  762831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
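	(The hostname step above is one idempotent shell snippet run over SSH: set the hostname, then make sure /etc/hosts maps 127.0.1.1 to it. A standalone sketch of the same logic, with NAME standing in for the profile name and passwordless sudo assumed on the target:

	NAME=old-k8s-version-490121
	sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
	  if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts   # rewrite the existing entry
	  else
	    echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts                       # or append a fresh one
	  fi
	fi
	)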
	I0307 19:30:30.380451  762831 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18239-558171/.minikube CaCertPath:/home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18239-558171/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18239-558171/.minikube}
	I0307 19:30:30.380472  762831 ubuntu.go:177] setting up certificates
	I0307 19:30:30.380481  762831 provision.go:84] configureAuth start
	I0307 19:30:30.380541  762831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-490121
	I0307 19:30:30.408378  762831 provision.go:143] copyHostCerts
	I0307 19:30:30.408446  762831 exec_runner.go:144] found /home/jenkins/minikube-integration/18239-558171/.minikube/ca.pem, removing ...
	I0307 19:30:30.408460  762831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18239-558171/.minikube/ca.pem
	I0307 19:30:30.408540  762831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18239-558171/.minikube/ca.pem (1082 bytes)
	I0307 19:30:30.408687  762831 exec_runner.go:144] found /home/jenkins/minikube-integration/18239-558171/.minikube/cert.pem, removing ...
	I0307 19:30:30.408693  762831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18239-558171/.minikube/cert.pem
	I0307 19:30:30.408721  762831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18239-558171/.minikube/cert.pem (1123 bytes)
	I0307 19:30:30.408787  762831 exec_runner.go:144] found /home/jenkins/minikube-integration/18239-558171/.minikube/key.pem, removing ...
	I0307 19:30:30.408792  762831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18239-558171/.minikube/key.pem
	I0307 19:30:30.408817  762831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18239-558171/.minikube/key.pem (1675 bytes)
	I0307 19:30:30.408869  762831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18239-558171/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-490121 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-490121]
	I0307 19:30:31.200107  762831 provision.go:177] copyRemoteCerts
	I0307 19:30:31.200192  762831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 19:30:31.200255  762831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-490121
	I0307 19:30:31.229660  762831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/old-k8s-version-490121/id_rsa Username:docker}
	I0307 19:30:31.326822  762831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 19:30:31.356281  762831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0307 19:30:31.391623  762831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 19:30:31.424440  762831 provision.go:87] duration metric: took 1.043933521s to configureAuth
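	(configureAuth generated a server certificate whose SANs cover every address logged above, namely 127.0.0.1, 192.168.85.2, localhost, minikube and old-k8s-version-490121, then copied ca.pem, server.pem and server-key.pem into /etc/docker on the machine. A rough openssl-CLI equivalent, with illustrative file names; -copy_extensions needs OpenSSL 3.x:

	openssl req -new -key server-key.pem -subj "/O=jenkins.old-k8s-version-490121" \
	  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:old-k8s-version-490121" \
	  -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -copy_extensions copy -days 365 -out server.pem
	)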
	I0307 19:30:31.424518  762831 ubuntu.go:193] setting minikube options for container-runtime
	I0307 19:30:31.424778  762831 config.go:182] Loaded profile config "old-k8s-version-490121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0307 19:30:31.424809  762831 machine.go:97] duration metric: took 4.533152343s to provisionDockerMachine
	I0307 19:30:31.424840  762831 start.go:293] postStartSetup for "old-k8s-version-490121" (driver="docker")
	I0307 19:30:31.424866  762831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 19:30:31.424978  762831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 19:30:31.425051  762831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-490121
	I0307 19:30:31.453213  762831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/old-k8s-version-490121/id_rsa Username:docker}
	I0307 19:30:31.552317  762831 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 19:30:31.557096  762831 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0307 19:30:31.557129  762831 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0307 19:30:31.557138  762831 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0307 19:30:31.557146  762831 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0307 19:30:31.557156  762831 filesync.go:126] Scanning /home/jenkins/minikube-integration/18239-558171/.minikube/addons for local assets ...
	I0307 19:30:31.557213  762831 filesync.go:126] Scanning /home/jenkins/minikube-integration/18239-558171/.minikube/files for local assets ...
	I0307 19:30:31.557290  762831 filesync.go:149] local asset: /home/jenkins/minikube-integration/18239-558171/.minikube/files/etc/ssl/certs/5635812.pem -> 5635812.pem in /etc/ssl/certs
	I0307 19:30:31.557400  762831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 19:30:31.566574  762831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/files/etc/ssl/certs/5635812.pem --> /etc/ssl/certs/5635812.pem (1708 bytes)
	I0307 19:30:31.602504  762831 start.go:296] duration metric: took 177.635747ms for postStartSetup
	I0307 19:30:31.602585  762831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 19:30:31.602639  762831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-490121
	I0307 19:30:31.621032  762831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/old-k8s-version-490121/id_rsa Username:docker}
	I0307 19:30:31.711600  762831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 19:30:31.716908  762831 fix.go:56] duration metric: took 5.197210488s for fixHost
	I0307 19:30:31.716939  762831 start.go:83] releasing machines lock for "old-k8s-version-490121", held for 5.197266299s
	I0307 19:30:31.717009  762831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-490121
	I0307 19:30:31.762500  762831 ssh_runner.go:195] Run: cat /version.json
	I0307 19:30:31.762563  762831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-490121
	I0307 19:30:31.762797  762831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 19:30:31.762860  762831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-490121
	I0307 19:30:31.788634  762831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/old-k8s-version-490121/id_rsa Username:docker}
	I0307 19:30:31.813455  762831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/old-k8s-version-490121/id_rsa Username:docker}
	I0307 19:30:31.886466  762831 ssh_runner.go:195] Run: systemctl --version
	I0307 19:30:32.054630  762831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 19:30:32.061076  762831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0307 19:30:32.099255  762831 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0307 19:30:32.099340  762831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 19:30:32.120116  762831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
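	(The two find/exec commands above normalize the loopback CNI config, injecting a "name" field if it is missing and pinning cniVersion to 1.0.0, and would rename any bridge or podman configs out of the way so the kindnet config is the only active one; here none were found. The same steps in plain shell, assuming the standard /etc/cni/net.d layout:

	for f in /etc/cni/net.d/*loopback.conf*; do
	  [ -e "$f" ] || continue
	  grep -q '"name"' "$f" || sudo sed -i '/"type": "loopback"/i \    "name": "loopback",' "$f"
	  sudo sed -i 's/"cniVersion": ".*"/"cniVersion": "1.0.0"/' "$f"
	done
	for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	  [ -e "$f" ] && sudo mv "$f" "$f.mk_disabled"   # parked, not deleted
	done
	)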
	I0307 19:30:32.120139  762831 start.go:494] detecting cgroup driver to use...
	I0307 19:30:32.120173  762831 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0307 19:30:32.120232  762831 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 19:30:32.150974  762831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 19:30:32.175790  762831 docker.go:217] disabling cri-docker service (if available) ...
	I0307 19:30:32.175849  762831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0307 19:30:32.195370  762831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0307 19:30:32.218055  762831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0307 19:30:32.339339  762831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0307 19:30:32.469084  762831 docker.go:233] disabling docker service ...
	I0307 19:30:32.469205  762831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0307 19:30:32.489612  762831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0307 19:30:32.516989  762831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0307 19:30:32.647378  762831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0307 19:30:32.775912  762831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0307 19:30:32.788961  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 19:30:32.807628  762831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0307 19:30:32.819629  762831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 19:30:32.830436  762831 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 19:30:32.830567  762831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 19:30:32.841963  762831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 19:30:32.854833  762831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 19:30:32.864735  762831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 19:30:32.878904  762831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 19:30:32.889406  762831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 19:30:32.900317  762831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 19:30:32.910072  762831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 19:30:32.918843  762831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:30:33.021620  762831 ssh_runner.go:195] Run: sudo systemctl restart containerd
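	(The sed series above rewrites /etc/containerd/config.toml in place: pin the sandbox (pause) image, turn off restrict_oom_score_adj, force SystemdCgroup = false to match the detected cgroupfs driver, and migrate any runtime.v1/runc.v1 references to io.containerd.runc.v2. The essential edits, condensed from the commands as logged:

	printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd
	)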
	I0307 19:30:33.211462  762831 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0307 19:30:33.211531  762831 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0307 19:30:33.215635  762831 start.go:562] Will wait 60s for crictl version
	I0307 19:30:33.215748  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:30:33.220134  762831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 19:30:33.311969  762831 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0307 19:30:33.312031  762831 ssh_runner.go:195] Run: containerd --version
	I0307 19:30:33.356212  762831 ssh_runner.go:195] Run: containerd --version
	I0307 19:30:33.406791  762831 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	I0307 19:30:33.411626  762831 cli_runner.go:164] Run: docker network inspect old-k8s-version-490121 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
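	(The Go template passed to docker network inspect above flattens name, driver, subnet, gateway, MTU and container IPs into a single JSON object. When only the subnet and gateway are needed, the template can be much smaller:

	docker network inspect old-k8s-version-490121 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	)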
	I0307 19:30:33.440324  762831 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0307 19:30:33.444244  762831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
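	(Both /etc/hosts patches in this phase, this one and the control-plane.minikube.internal one below, use the same rewrite-and-copy pattern: filter out any stale line for the name, append the fresh mapping, then sudo cp the temp file over /etc/hosts (a plain redirect would fail because the shell, not sudo, opens the target). Spelled out:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.85.1\thost.minikube.internal\n'
	} > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts
	)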
	I0307 19:30:33.463033  762831 kubeadm.go:877] updating cluster {Name:old-k8s-version-490121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 19:30:33.463159  762831 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0307 19:30:33.463234  762831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 19:30:33.527045  762831 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 19:30:33.527076  762831 containerd.go:519] Images already preloaded, skipping extraction
	I0307 19:30:33.527137  762831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 19:30:33.600715  762831 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 19:30:33.600741  762831 cache_images.go:84] Images are preloaded, skipping loading
	I0307 19:30:33.600750  762831 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0307 19:30:33.600869  762831 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-490121 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
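	(The unit text above lands as a systemd drop-in, the 10-kubeadm.conf scp'd a few lines below; the empty ExecStart= line clears the packaged default before the minikube-specific command line takes effect. To confirm which ExecStart wins after the reload:

	systemctl cat kubelet | grep -A1 '^ExecStart='
	sudo systemctl daemon-reload && sudo systemctl restart kubelet
	)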
	I0307 19:30:33.600940  762831 ssh_runner.go:195] Run: sudo crictl info
	I0307 19:30:33.647294  762831 cni.go:84] Creating CNI manager for ""
	I0307 19:30:33.647321  762831 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 19:30:33.647331  762831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 19:30:33.647388  762831 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-490121 NodeName:old-k8s-version-490121 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0307 19:30:33.647587  762831 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-490121"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
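	(This kubeadm.k8s.io/v1beta2 config is written to /var/tmp/minikube/kubeadm.yaml.new, see the scp below, and later diffed against the previous copy to decide whether reconfiguration is needed. It can also be exercised without touching the node via kubeadm's dry-run mode:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	)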
	
	I0307 19:30:33.647695  762831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0307 19:30:33.659871  762831 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 19:30:33.659944  762831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 19:30:33.669457  762831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0307 19:30:33.690822  762831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 19:30:33.716800  762831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0307 19:30:33.741018  762831 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0307 19:30:33.757593  762831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 19:30:33.777426  762831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:30:33.874617  762831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 19:30:33.891273  762831 certs.go:68] Setting up /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121 for IP: 192.168.85.2
	I0307 19:30:33.891298  762831 certs.go:194] generating shared ca certs ...
	I0307 19:30:33.891315  762831 certs.go:226] acquiring lock for ca certs: {Name:mke14792b1616e9503645c7147aed38043ea5d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:30:33.891449  762831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18239-558171/.minikube/ca.key
	I0307 19:30:33.891505  762831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18239-558171/.minikube/proxy-client-ca.key
	I0307 19:30:33.891516  762831 certs.go:256] generating profile certs ...
	I0307 19:30:33.891607  762831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.key
	I0307 19:30:33.891674  762831 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/apiserver.key.6fe806f2
	I0307 19:30:33.891717  762831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/proxy-client.key
	I0307 19:30:33.891823  762831 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/563581.pem (1338 bytes)
	W0307 19:30:33.891860  762831 certs.go:480] ignoring /home/jenkins/minikube-integration/18239-558171/.minikube/certs/563581_empty.pem, impossibly tiny 0 bytes
	I0307 19:30:33.891873  762831 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 19:30:33.891896  762831 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem (1082 bytes)
	I0307 19:30:33.891921  762831 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/cert.pem (1123 bytes)
	I0307 19:30:33.891945  762831 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/key.pem (1675 bytes)
	I0307 19:30:33.891990  762831 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/files/etc/ssl/certs/5635812.pem (1708 bytes)
	I0307 19:30:33.892618  762831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 19:30:33.919224  762831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 19:30:33.945666  762831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 19:30:33.972650  762831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 19:30:34.002531  762831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0307 19:30:34.037093  762831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 19:30:34.063237  762831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 19:30:34.088859  762831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0307 19:30:34.113766  762831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/certs/563581.pem --> /usr/share/ca-certificates/563581.pem (1338 bytes)
	I0307 19:30:34.137268  762831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/files/etc/ssl/certs/5635812.pem --> /usr/share/ca-certificates/5635812.pem (1708 bytes)
	I0307 19:30:34.161235  762831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 19:30:34.185068  762831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 19:30:34.204155  762831 ssh_runner.go:195] Run: openssl version
	I0307 19:30:34.209838  762831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/563581.pem && ln -fs /usr/share/ca-certificates/563581.pem /etc/ssl/certs/563581.pem"
	I0307 19:30:34.219223  762831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/563581.pem
	I0307 19:30:34.222821  762831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 18:50 /usr/share/ca-certificates/563581.pem
	I0307 19:30:34.222891  762831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/563581.pem
	I0307 19:30:34.229956  762831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/563581.pem /etc/ssl/certs/51391683.0"
	I0307 19:30:34.238920  762831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5635812.pem && ln -fs /usr/share/ca-certificates/5635812.pem /etc/ssl/certs/5635812.pem"
	I0307 19:30:34.248401  762831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5635812.pem
	I0307 19:30:34.252130  762831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 18:50 /usr/share/ca-certificates/5635812.pem
	I0307 19:30:34.252234  762831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5635812.pem
	I0307 19:30:34.259310  762831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5635812.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 19:30:34.268499  762831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 19:30:34.277733  762831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:30:34.281018  762831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:30:34.281080  762831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:30:34.288488  762831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
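	(The test -L / ln -fs pairs above install each CA into the OpenSSL trust directory under its subject-hash name: openssl x509 -hash prints the 8-hex-digit hash (b5213941 for minikubeCA), and OpenSSL resolves trusted roots via <hash>.0 symlinks. Reproduced for one cert:

	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$H.0"
	)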
	I0307 19:30:34.297236  762831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 19:30:34.300722  762831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0307 19:30:34.307415  762831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0307 19:30:34.314082  762831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0307 19:30:34.320828  762831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0307 19:30:34.328193  762831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0307 19:30:34.335113  762831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
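	(Each openssl x509 -checkend 86400 call exits non-zero if the certificate expires within the next 24 hours, presumably feeding minikube's decision on whether to regenerate; all six control-plane certs pass here. The same sweep as a loop:

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" || echo "$c: expiring soon"
	done
	)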
	I0307 19:30:34.341908  762831 kubeadm.go:391] StartCluster: {Name:old-k8s-version-490121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:30:34.342018  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0307 19:30:34.342097  762831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0307 19:30:34.379172  762831 cri.go:89] found id: "f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384"
	I0307 19:30:34.379199  762831 cri.go:89] found id: "d69ea4ee77885ba5ed1fabc9e7171fddcad8c266310d6312c0d16e03ed43c9af"
	I0307 19:30:34.379204  762831 cri.go:89] found id: "5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907"
	I0307 19:30:34.379207  762831 cri.go:89] found id: "1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29"
	I0307 19:30:34.379210  762831 cri.go:89] found id: "99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939"
	I0307 19:30:34.379213  762831 cri.go:89] found id: "156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4"
	I0307 19:30:34.379216  762831 cri.go:89] found id: "f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08"
	I0307 19:30:34.379255  762831 cri.go:89] found id: "05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190"
	I0307 19:30:34.379266  762831 cri.go:89] found id: ""
	I0307 19:30:34.379334  762831 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0307 19:30:34.391630  762831 cri.go:116] JSON = null
	W0307 19:30:34.391698  762831 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
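	(The warning above comes from a cross-check: crictl listed 8 kube-system containers, but runc, queried for paused state, returned null, so there was nothing to unpause and the mismatch is logged and ignored. The two sides of the check:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
	sudo runc --root /run/containerd/runc/k8s.io list -f json   # prints null when no runc state exists
	)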
	I0307 19:30:34.391778  762831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0307 19:30:34.400538  762831 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0307 19:30:34.400560  762831 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0307 19:30:34.400566  762831 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0307 19:30:34.400628  762831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 19:30:34.408837  762831 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 19:30:34.409433  762831 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-490121" does not appear in /home/jenkins/minikube-integration/18239-558171/kubeconfig
	I0307 19:30:34.409711  762831 kubeconfig.go:62] /home/jenkins/minikube-integration/18239-558171/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-490121" cluster setting kubeconfig missing "old-k8s-version-490121" context setting]
	I0307 19:30:34.410238  762831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/kubeconfig: {Name:mk6862a934ece36327360ff645a33ee6e04a2f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
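	(The repair adds the missing cluster and context stanzas to the shared kubeconfig before start waits on the node. A rough kubectl equivalent, with the server and CA paths inferred from this profile:

	kubectl config set-cluster old-k8s-version-490121 \
	  --server=https://192.168.85.2:8443 \
	  --certificate-authority=/home/jenkins/minikube-integration/18239-558171/.minikube/ca.crt \
	  --kubeconfig=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	kubectl config set-context old-k8s-version-490121 \
	  --cluster=old-k8s-version-490121 --user=old-k8s-version-490121 \
	  --kubeconfig=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	)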
	I0307 19:30:34.411577  762831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 19:30:34.420500  762831 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0307 19:30:34.420533  762831 kubeadm.go:591] duration metric: took 19.962332ms to restartPrimaryControlPlane
	I0307 19:30:34.420543  762831 kubeadm.go:393] duration metric: took 78.645078ms to StartCluster
	I0307 19:30:34.420558  762831 settings.go:142] acquiring lock: {Name:mkebfa804b6349436c6d99572f0f0da9cb5ad1f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:30:34.420614  762831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18239-558171/kubeconfig
	I0307 19:30:34.421497  762831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/kubeconfig: {Name:mk6862a934ece36327360ff645a33ee6e04a2f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:30:34.421725  762831 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0307 19:30:34.425015  762831 out.go:177] * Verifying Kubernetes components...
	I0307 19:30:34.422035  762831 config.go:182] Loaded profile config "old-k8s-version-490121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0307 19:30:34.422059  762831 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 19:30:34.427126  762831 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-490121"
	I0307 19:30:34.427163  762831 addons.go:69] Setting dashboard=true in profile "old-k8s-version-490121"
	I0307 19:30:34.427222  762831 addons.go:234] Setting addon dashboard=true in "old-k8s-version-490121"
	W0307 19:30:34.427263  762831 addons.go:243] addon dashboard should already be in state true
	I0307 19:30:34.427169  762831 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-490121"
	W0307 19:30:34.427307  762831 addons.go:243] addon storage-provisioner should already be in state true
	I0307 19:30:34.427329  762831 host.go:66] Checking if "old-k8s-version-490121" exists ...
	I0307 19:30:34.427333  762831 host.go:66] Checking if "old-k8s-version-490121" exists ...
	I0307 19:30:34.427791  762831 cli_runner.go:164] Run: docker container inspect old-k8s-version-490121 --format={{.State.Status}}
	I0307 19:30:34.427850  762831 cli_runner.go:164] Run: docker container inspect old-k8s-version-490121 --format={{.State.Status}}
	I0307 19:30:34.427175  762831 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-490121"
	I0307 19:30:34.431603  762831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-490121"
	I0307 19:30:34.427236  762831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:30:34.427184  762831 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-490121"
	I0307 19:30:34.431958  762831 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-490121"
	W0307 19:30:34.432045  762831 addons.go:243] addon metrics-server should already be in state true
	I0307 19:30:34.432081  762831 cli_runner.go:164] Run: docker container inspect old-k8s-version-490121 --format={{.State.Status}}
	I0307 19:30:34.432112  762831 host.go:66] Checking if "old-k8s-version-490121" exists ...
	I0307 19:30:34.432615  762831 cli_runner.go:164] Run: docker container inspect old-k8s-version-490121 --format={{.State.Status}}
	I0307 19:30:34.460631  762831 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:30:34.462639  762831 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 19:30:34.462654  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 19:30:34.462731  762831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-490121
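The cli_runner line above extracts the host port mapped to the guest's SSH port (22/tcp) with a Go template handed to `docker container inspect -f`; the result (33808 here) feeds the sshutil.go client below. A minimal sketch of driving the same command from Go — the container name is the one from this log, everything else is illustrative.

	// Sketch: resolve the host port behind a container's 22/tcp mapping,
	// mirroring the `docker container inspect -f ...HostPort...` call above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("old-k8s-version-490121")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh via 127.0.0.1:" + port) // e.g. 33808, as in sshutil.go:53
	}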
	I0307 19:30:34.485040  762831 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0307 19:30:34.484046  762831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/old-k8s-version-490121/id_rsa Username:docker}
	I0307 19:30:34.493699  762831 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0307 19:30:34.499922  762831 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0307 19:30:34.499952  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0307 19:30:34.500027  762831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-490121
	I0307 19:30:34.536624  762831 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-490121"
	W0307 19:30:34.536647  762831 addons.go:243] addon default-storageclass should already be in state true
	I0307 19:30:34.536672  762831 host.go:66] Checking if "old-k8s-version-490121" exists ...
	I0307 19:30:34.537092  762831 cli_runner.go:164] Run: docker container inspect old-k8s-version-490121 --format={{.State.Status}}
	I0307 19:30:34.540041  762831 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0307 19:30:34.542182  762831 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0307 19:30:34.540003  762831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/old-k8s-version-490121/id_rsa Username:docker}
	I0307 19:30:34.542542  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0307 19:30:34.542599  762831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-490121
	I0307 19:30:34.574951  762831 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 19:30:34.574972  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 19:30:34.575036  762831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-490121
	I0307 19:30:34.590718  762831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/old-k8s-version-490121/id_rsa Username:docker}
	I0307 19:30:34.613972  762831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/old-k8s-version-490121/id_rsa Username:docker}
	I0307 19:30:34.626191  762831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 19:30:34.647083  762831 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-490121" to be "Ready" ...
	I0307 19:30:34.668612  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 19:30:34.733023  762831 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0307 19:30:34.733087  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0307 19:30:34.755646  762831 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0307 19:30:34.755712  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0307 19:30:34.770774  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0307 19:30:34.784847  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:34.784925  762831 retry.go:31] will retry after 284.271879ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
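Every failed `kubectl apply` above is followed by a retry.go line announcing a slightly larger, jittered delay while the apiserver on localhost:8443 comes back up. A minimal sketch of that retry loop follows; it is not minikube's actual retry.go (whose exact backoff policy isn't visible in the log), just the pattern the log demonstrates.

	// Sketch: retry a flaky operation with a growing, jittered delay, matching
	// the "apply failed, will retry ... after 284.271879ms" lines above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 1; i <= attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			if i == attempts {
				break
			}
			// Grow linearly and add jitter so the parallel addon appliers
			// (storage-provisioner, storageclass, dashboard, metrics-server)
			// do not retry in lockstep against the recovering apiserver.
			delay := time.Duration(i)*base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		calls := 0
		err := retryWithBackoff(5, 300*time.Millisecond, func() error {
			calls++
			if calls < 3 { // simulate two refused connections, then success
				return errors.New("connection to the server localhost:8443 was refused")
			}
			return nil
		})
		fmt.Println("result:", err)
	}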
	I0307 19:30:34.786673  762831 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0307 19:30:34.786730  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0307 19:30:34.802013  762831 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0307 19:30:34.802091  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0307 19:30:34.830230  762831 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 19:30:34.830293  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0307 19:30:34.838300  762831 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0307 19:30:34.838372  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0307 19:30:34.864520  762831 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0307 19:30:34.864591  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0307 19:30:34.877838  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 19:30:34.896591  762831 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0307 19:30:34.896661  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0307 19:30:34.910036  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:34.910135  762831 retry.go:31] will retry after 309.047225ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:34.920660  762831 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0307 19:30:34.920731  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0307 19:30:34.940244  762831 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0307 19:30:34.940318  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0307 19:30:34.959745  762831 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0307 19:30:34.959818  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0307 19:30:34.978844  762831 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 19:30:34.978917  762831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W0307 19:30:34.987561  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:34.987635  762831 retry.go:31] will retry after 311.068002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:34.999407  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 19:30:35.069680  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0307 19:30:35.077051  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:35.077167  762831 retry.go:31] will retry after 259.227355ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 19:30:35.145333  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:35.145419  762831 retry.go:31] will retry after 529.510369ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:35.220251  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 19:30:35.296798  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:35.296831  762831 retry.go:31] will retry after 298.117273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:35.299083  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 19:30:35.337231  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0307 19:30:35.385891  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:35.385929  762831 retry.go:31] will retry after 453.619543ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 19:30:35.420318  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:35.420352  762831 retry.go:31] will retry after 395.650002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:35.595668  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 19:30:35.669136  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:35.669168  762831 retry.go:31] will retry after 794.401064ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:35.675465  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0307 19:30:35.756359  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:35.756396  762831 retry.go:31] will retry after 650.068344ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:35.816606  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 19:30:35.840100  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 19:30:35.900042  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:35.900092  762831 retry.go:31] will retry after 500.476996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 19:30:35.929733  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:35.929771  762831 retry.go:31] will retry after 746.404218ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:36.401199  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 19:30:36.407568  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 19:30:36.464333  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 19:30:36.497995  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:36.498073  762831 retry.go:31] will retry after 1.041451192s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 19:30:36.525942  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:36.526015  762831 retry.go:31] will retry after 1.042231632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 19:30:36.567190  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:36.567224  762831 retry.go:31] will retry after 879.183244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:36.647799  762831 node_ready.go:53] error getting node "old-k8s-version-490121": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-490121": dial tcp 192.168.85.2:8443: connect: connection refused
	I0307 19:30:36.677162  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 19:30:36.746359  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:36.746389  762831 retry.go:31] will retry after 1.136746297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:37.446727  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 19:30:37.517344  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:37.517381  762831 retry.go:31] will retry after 1.890351475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:37.540701  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 19:30:37.569151  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0307 19:30:37.635723  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:37.635756  762831 retry.go:31] will retry after 1.253904816s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 19:30:37.659976  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:37.660012  762831 retry.go:31] will retry after 1.579356746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:37.883379  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 19:30:37.960038  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:37.960077  762831 retry.go:31] will retry after 1.014782741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:38.890127  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0307 19:30:38.963617  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:38.963655  762831 retry.go:31] will retry after 1.977388025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:38.975852  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 19:30:39.070363  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:39.070409  762831 retry.go:31] will retry after 1.693213296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:39.147924  762831 node_ready.go:53] error getting node "old-k8s-version-490121": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-490121": dial tcp 192.168.85.2:8443: connect: connection refused
	I0307 19:30:39.240327  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0307 19:30:39.310809  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:39.310842  762831 retry.go:31] will retry after 2.63520914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:39.407993  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 19:30:39.477145  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:39.477178  762831 retry.go:31] will retry after 2.56162971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:40.763848  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 19:30:40.835373  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:40.835416  762831 retry.go:31] will retry after 1.575887995s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:40.941863  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0307 19:30:41.011738  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:41.011779  762831 retry.go:31] will retry after 3.868888502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:41.148338  762831 node_ready.go:53] error getting node "old-k8s-version-490121": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-490121": dial tcp 192.168.85.2:8443: connect: connection refused
	I0307 19:30:41.946836  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0307 19:30:42.030981  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:42.031016  762831 retry.go:31] will retry after 2.877833945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:42.039154  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 19:30:42.126847  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:42.126891  762831 retry.go:31] will retry after 2.634958605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:42.411984  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 19:30:42.491878  762831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:42.491914  762831 retry.go:31] will retry after 3.673361459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 19:30:44.762992  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0307 19:30:44.881271  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 19:30:44.909286  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 19:30:46.166446  762831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 19:30:51.291815  762831 node_ready.go:49] node "old-k8s-version-490121" has status "Ready":"True"
	I0307 19:30:51.291837  762831 node_ready.go:38] duration metric: took 16.644720969s for node "old-k8s-version-490121" to be "Ready" ...
	I0307 19:30:51.291846  762831 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 19:30:51.547842  762831 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-mskrg" in "kube-system" namespace to be "Ready" ...
	I0307 19:30:51.913425  762831 pod_ready.go:92] pod "coredns-74ff55c5b-mskrg" in "kube-system" namespace has status "Ready":"True"
	I0307 19:30:51.913493  762831 pod_ready.go:81] duration metric: took 365.551333ms for pod "coredns-74ff55c5b-mskrg" in "kube-system" namespace to be "Ready" ...
	I0307 19:30:51.913541  762831 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-490121" in "kube-system" namespace to be "Ready" ...
	I0307 19:30:52.071583  762831 pod_ready.go:92] pod "etcd-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"True"
	I0307 19:30:52.071655  762831 pod_ready.go:81] duration metric: took 158.08547ms for pod "etcd-old-k8s-version-490121" in "kube-system" namespace to be "Ready" ...
	I0307 19:30:52.071684  762831 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-490121" in "kube-system" namespace to be "Ready" ...
	I0307 19:30:53.171170  762831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.408120686s)
	I0307 19:30:54.097224  762831 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:30:54.098051  762831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.216739018s)
	I0307 19:30:54.100447  762831 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-490121 addons enable metrics-server
	
	I0307 19:30:54.098317  762831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.188984252s)
	I0307 19:30:54.098378  762831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.931899572s)
	I0307 19:30:54.102431  762831 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-490121"
	I0307 19:30:54.104584  762831 out.go:177] * Enabled addons: default-storageclass, dashboard, storage-provisioner, metrics-server
	I0307 19:30:54.106656  762831 addons.go:505] duration metric: took 19.684590078s for enable addons: enabled=[default-storageclass dashboard storage-provisioner metrics-server]
	I0307 19:30:56.581498  762831 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:30:59.079158  762831 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:00.081733  762831 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"True"
	I0307 19:31:00.081780  762831 pod_ready.go:81] duration metric: took 8.010068986s for pod "kube-apiserver-old-k8s-version-490121" in "kube-system" namespace to be "Ready" ...
	I0307 19:31:00.081794  762831 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace to be "Ready" ...
	I0307 19:31:02.090160  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:04.588080  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:06.588223  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:08.588823  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:11.088492  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:13.089228  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:15.092115  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:17.093454  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:19.094067  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:21.588252  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:24.088898  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:26.589006  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:29.087601  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:31.089043  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:33.588096  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:35.589059  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:38.088700  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:40.089283  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:42.090608  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:44.588990  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:47.089266  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:49.588650  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:52.087849  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:54.088593  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:56.588309  762831 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:31:57.587908  762831 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"True"
	I0307 19:31:57.587936  762831 pod_ready.go:81] duration metric: took 57.506133731s for pod "kube-controller-manager-old-k8s-version-490121" in "kube-system" namespace to be "Ready" ...
	I0307 19:31:57.587949  762831 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5rbpn" in "kube-system" namespace to be "Ready" ...
	I0307 19:31:57.592898  762831 pod_ready.go:92] pod "kube-proxy-5rbpn" in "kube-system" namespace has status "Ready":"True"
	I0307 19:31:57.592925  762831 pod_ready.go:81] duration metric: took 4.96872ms for pod "kube-proxy-5rbpn" in "kube-system" namespace to be "Ready" ...
	I0307 19:31:57.592936  762831 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-490121" in "kube-system" namespace to be "Ready" ...
	I0307 19:31:59.607148  762831 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:02.098837  762831 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:04.099378  762831 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:06.100007  762831 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:08.599104  762831 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:10.599654  762831 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-490121" in "kube-system" namespace has status "Ready":"True"
	I0307 19:32:10.599678  762831 pod_ready.go:81] duration metric: took 13.006734482s for pod "kube-scheduler-old-k8s-version-490121" in "kube-system" namespace to be "Ready" ...
	I0307 19:32:10.599690  762831 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace to be "Ready" ...
	I0307 19:32:12.605589  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:14.606100  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:16.606568  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:19.106001  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:21.106162  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:23.106502  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:25.106995  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:27.606264  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:29.606926  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:32.106769  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:34.606166  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:37.107256  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:39.112526  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:41.605029  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:43.621991  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:46.105021  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:48.106406  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:50.107615  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:52.605407  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:54.605985  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:57.106869  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:32:59.107540  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:01.606304  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:04.106187  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:06.606584  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:09.124646  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:11.605664  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:13.605938  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:15.606165  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:18.106309  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:20.106544  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:22.605997  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:24.606936  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:27.107041  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:29.111031  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:31.606389  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:33.611702  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:36.106153  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:38.607187  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:41.105924  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:43.110415  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:45.605585  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:47.606012  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:49.606708  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:52.106231  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:54.106306  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:56.605836  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:33:58.606251  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:00.607403  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:03.105634  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:05.107060  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:07.606150  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:09.607765  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:12.107152  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:14.606558  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:17.106973  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:19.110832  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:21.606267  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:24.106381  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:26.605921  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:28.606401  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:31.107043  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:33.606534  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:36.106511  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:38.605158  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:40.605737  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:42.607136  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:45.109023  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:47.606200  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:49.608128  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:52.106697  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:54.605489  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:57.105694  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:34:59.109093  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:01.113425  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:03.606877  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:06.107170  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:08.605434  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:10.606286  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:12.608255  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:15.107192  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:17.115034  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:19.606945  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:22.106382  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:24.606733  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:26.607608  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:29.110754  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:31.606730  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:33.656753  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:36.129051  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:38.606994  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:41.105989  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:43.106627  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:45.115823  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:47.609214  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:50.109551  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:52.606769  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:54.611278  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:57.107679  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:59.110902  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:36:01.134439  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:36:03.606584  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:36:05.614979  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:36:08.112034  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:36:10.606231  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:36:10.606263  762831 pod_ready.go:81] duration metric: took 4m0.006564846s for pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace to be "Ready" ...
	E0307 19:36:10.606274  762831 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0307 19:36:10.606283  762831 pod_ready.go:38] duration metric: took 5m19.314418834s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
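Editor's note: the long poll above is a readiness wait that gives up when its context deadline expires, which is why metrics-server (stuck in ImagePullBackOff, see the kubelet problems below) ends with "context deadline exceeded" after 4m0s. The following sketch reproduces that wait pattern under stated assumptions: it is not minikube's pod_ready.go, the 2-second interval is a guess, and the pod name is taken from the log.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls the pod's Ready condition via kubectl until it is
// "True" or ctx's deadline expires.
func waitPodReady(ctx context.Context, ns, name string) error {
	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
	for {
		// Errors from kubectl are treated as "not ready yet" and retried.
		out, _ := exec.CommandContext(ctx, "kubectl", "-n", ns,
			"get", "pod", name, "-o", "jsonpath="+jsonpath).Output()
		if strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		select {
		case <-ctx.Done():
			// Mirrors the log's "waitPodCondition: context deadline exceeded".
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-time.After(2 * time.Second): // assumed poll interval
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	err := waitPodReady(ctx, "kube-system", "metrics-server-9975d5f86-w9hn6")
	fmt.Println("ready err:", err)
}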
	I0307 19:36:10.606296  762831 api_server.go:52] waiting for apiserver process to appear ...
	I0307 19:36:10.606341  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 19:36:10.606408  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 19:36:10.656033  762831 cri.go:89] found id: "ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020"
	I0307 19:36:10.656094  762831 cri.go:89] found id: "99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939"
	I0307 19:36:10.656106  762831 cri.go:89] found id: ""
	I0307 19:36:10.656114  762831 logs.go:276] 2 containers: [ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020 99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939]
	I0307 19:36:10.656170  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.659898  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.663752  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 19:36:10.663849  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 19:36:10.711143  762831 cri.go:89] found id: "5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d"
	I0307 19:36:10.711195  762831 cri.go:89] found id: "05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190"
	I0307 19:36:10.711214  762831 cri.go:89] found id: ""
	I0307 19:36:10.711239  762831 logs.go:276] 2 containers: [5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d 05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190]
	I0307 19:36:10.711326  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.715686  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.719890  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 19:36:10.720008  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 19:36:10.782775  762831 cri.go:89] found id: "9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b"
	I0307 19:36:10.782850  762831 cri.go:89] found id: "f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384"
	I0307 19:36:10.782869  762831 cri.go:89] found id: ""
	I0307 19:36:10.782892  762831 logs.go:276] 2 containers: [9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384]
	I0307 19:36:10.782980  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.787225  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.791833  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 19:36:10.791972  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 19:36:10.850718  762831 cri.go:89] found id: "62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f"
	I0307 19:36:10.850794  762831 cri.go:89] found id: "156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4"
	I0307 19:36:10.850813  762831 cri.go:89] found id: ""
	I0307 19:36:10.850837  762831 logs.go:276] 2 containers: [62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f 156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4]
	I0307 19:36:10.850923  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.854985  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.858671  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 19:36:10.858755  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 19:36:10.909124  762831 cri.go:89] found id: "eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5"
	I0307 19:36:10.909149  762831 cri.go:89] found id: "5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907"
	I0307 19:36:10.909155  762831 cri.go:89] found id: ""
	I0307 19:36:10.909162  762831 logs.go:276] 2 containers: [eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5 5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907]
	I0307 19:36:10.909233  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.913385  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.917239  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 19:36:10.917311  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 19:36:10.962456  762831 cri.go:89] found id: "2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916"
	I0307 19:36:10.962475  762831 cri.go:89] found id: "f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08"
	I0307 19:36:10.962480  762831 cri.go:89] found id: ""
	I0307 19:36:10.962487  762831 logs.go:276] 2 containers: [2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916 f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08]
	I0307 19:36:10.962547  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.966970  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.970855  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 19:36:10.970976  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 19:36:11.016102  762831 cri.go:89] found id: "18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57"
	I0307 19:36:11.016128  762831 cri.go:89] found id: "1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29"
	I0307 19:36:11.016134  762831 cri.go:89] found id: ""
	I0307 19:36:11.016141  762831 logs.go:276] 2 containers: [18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57 1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29]
	I0307 19:36:11.016221  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:11.019977  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:11.023680  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 19:36:11.023815  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 19:36:11.062030  762831 cri.go:89] found id: "ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa"
	I0307 19:36:11.062103  762831 cri.go:89] found id: "f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093"
	I0307 19:36:11.062141  762831 cri.go:89] found id: ""
	I0307 19:36:11.062151  762831 logs.go:276] 2 containers: [ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093]
	I0307 19:36:11.062215  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:11.066359  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:11.070205  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0307 19:36:11.070279  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0307 19:36:11.109710  762831 cri.go:89] found id: "90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a"
	I0307 19:36:11.109783  762831 cri.go:89] found id: ""
	I0307 19:36:11.109813  762831 logs.go:276] 1 container: [90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a]
	I0307 19:36:11.109911  762831 ssh_runner.go:195] Run: which crictl
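Editor's note: each "listing CRI containers" block above is one `crictl ps -a --quiet --name=<component>` call whose output is split into container IDs, one per line. A minimal sketch of that discovery step, assuming a containerd host where crictl is run via sudo exactly as in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name
// matches the given component, returning one ID per output line.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		fmt.Printf("%s: %d containers: %v (err=%v)\n", c, len(ids), ids, err)
	}
}

Two IDs per component (current plus previous container) is expected here, since the node was restarted during the test.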
	I0307 19:36:11.114323  762831 logs.go:123] Gathering logs for kube-apiserver [ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020] ...
	I0307 19:36:11.114404  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020"
	I0307 19:36:11.176440  762831 logs.go:123] Gathering logs for etcd [5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d] ...
	I0307 19:36:11.176475  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d"
	I0307 19:36:11.230993  762831 logs.go:123] Gathering logs for kube-scheduler [62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f] ...
	I0307 19:36:11.231024  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f"
	I0307 19:36:11.300327  762831 logs.go:123] Gathering logs for kube-proxy [eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5] ...
	I0307 19:36:11.300359  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5"
	I0307 19:36:11.354977  762831 logs.go:123] Gathering logs for storage-provisioner [f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093] ...
	I0307 19:36:11.355007  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093"
	I0307 19:36:11.398803  762831 logs.go:123] Gathering logs for containerd ...
	I0307 19:36:11.398834  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 19:36:11.458507  762831 logs.go:123] Gathering logs for kube-controller-manager [f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08] ...
	I0307 19:36:11.458543  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08"
	I0307 19:36:11.532204  762831 logs.go:123] Gathering logs for kubernetes-dashboard [90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a] ...
	I0307 19:36:11.535311  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a"
	I0307 19:36:11.601998  762831 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:36:11.602025  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:36:11.766962  762831 logs.go:123] Gathering logs for kube-apiserver [99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939] ...
	I0307 19:36:11.766998  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939"
	I0307 19:36:11.869356  762831 logs.go:123] Gathering logs for coredns [9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b] ...
	I0307 19:36:11.869391  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b"
	I0307 19:36:11.909582  762831 logs.go:123] Gathering logs for coredns [f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384] ...
	I0307 19:36:11.909612  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384"
	I0307 19:36:11.955107  762831 logs.go:123] Gathering logs for kube-proxy [5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907] ...
	I0307 19:36:11.955137  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907"
	I0307 19:36:11.995979  762831 logs.go:123] Gathering logs for kube-controller-manager [2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916] ...
	I0307 19:36:11.996008  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916"
	I0307 19:36:12.056706  762831 logs.go:123] Gathering logs for dmesg ...
	I0307 19:36:12.056737  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:36:12.076984  762831 logs.go:123] Gathering logs for kindnet [18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57] ...
	I0307 19:36:12.077015  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57"
	I0307 19:36:12.120010  762831 logs.go:123] Gathering logs for container status ...
	I0307 19:36:12.120041  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:36:12.163174  762831 logs.go:123] Gathering logs for kubelet ...
	I0307 19:36:12.163205  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 19:36:12.218431  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562011     662 reflector.go:138] object-"kube-system"/"kube-proxy-token-vdtz2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-vdtz2" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.218682  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562119     662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.218920  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562202     662 reflector.go:138] object-"kube-system"/"coredns-token-zckzk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-zckzk" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.219145  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562259     662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.220131  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562331     662 reflector.go:138] object-"kube-system"/"kindnet-token-g7knv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-g7knv" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.220392  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562675     662 reflector.go:138] object-"default"/"default-token-rgss9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-rgss9" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.220637  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562749     662 reflector.go:138] object-"kube-system"/"metrics-server-token-t79rr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t79rr" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.220886  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.672443     662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-k4nsq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-k4nsq" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.233667  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:54 old-k8s-version-490121 kubelet[662]: E0307 19:30:54.291729     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:12.233913  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:54 old-k8s-version-490121 kubelet[662]: E0307 19:30:54.749946     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.236687  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:07 old-k8s-version-490121 kubelet[662]: E0307 19:31:07.413698     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:12.238386  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:20 old-k8s-version-490121 kubelet[662]: E0307 19:31:20.430619     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.239318  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:23 old-k8s-version-490121 kubelet[662]: E0307 19:31:23.898812     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.239668  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:24 old-k8s-version-490121 kubelet[662]: E0307 19:31:24.894861     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.240124  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:25 old-k8s-version-490121 kubelet[662]: E0307 19:31:25.899230     662 pod_workers.go:191] Error syncing pod 57707985-e3ad-463c-a7a9-150bfe271af7 ("storage-provisioner_kube-system(57707985-e3ad-463c-a7a9-150bfe271af7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(57707985-e3ad-463c-a7a9-150bfe271af7)"
	W0307 19:36:12.242571  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:31 old-k8s-version-490121 kubelet[662]: E0307 19:31:31.411639     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:12.242927  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:32 old-k8s-version-490121 kubelet[662]: E0307 19:31:32.839916     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.243723  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:43 old-k8s-version-490121 kubelet[662]: E0307 19:31:43.406752     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.244204  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:43 old-k8s-version-490121 kubelet[662]: E0307 19:31:43.954053     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.244574  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:52 old-k8s-version-490121 kubelet[662]: E0307 19:31:52.840518     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.244771  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:57 old-k8s-version-490121 kubelet[662]: E0307 19:31:57.404922     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.245369  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:08 old-k8s-version-490121 kubelet[662]: E0307 19:32:08.021627     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.249632  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:08 old-k8s-version-490121 kubelet[662]: E0307 19:32:08.404313     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.249991  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:12 old-k8s-version-490121 kubelet[662]: E0307 19:32:12.840494     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.252464  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:19 old-k8s-version-490121 kubelet[662]: E0307 19:32:19.412896     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:12.252795  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:25 old-k8s-version-490121 kubelet[662]: E0307 19:32:25.404097     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.252980  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:34 old-k8s-version-490121 kubelet[662]: E0307 19:32:34.404539     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.253309  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:38 old-k8s-version-490121 kubelet[662]: E0307 19:32:38.404439     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.253495  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:49 old-k8s-version-490121 kubelet[662]: E0307 19:32:49.404401     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.254222  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:51 old-k8s-version-490121 kubelet[662]: E0307 19:32:51.130308     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.254581  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:52 old-k8s-version-490121 kubelet[662]: E0307 19:32:52.840092     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.254790  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:02 old-k8s-version-490121 kubelet[662]: E0307 19:33:02.404637     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.255142  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:05 old-k8s-version-490121 kubelet[662]: E0307 19:33:05.404490     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.255352  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:14 old-k8s-version-490121 kubelet[662]: E0307 19:33:14.404856     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.255709  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:17 old-k8s-version-490121 kubelet[662]: E0307 19:33:17.403976     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.255916  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:27 old-k8s-version-490121 kubelet[662]: E0307 19:33:27.404480     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.256274  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:30 old-k8s-version-490121 kubelet[662]: E0307 19:33:30.404449     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.256625  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:41 old-k8s-version-490121 kubelet[662]: E0307 19:33:41.404057     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.260957  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:42 old-k8s-version-490121 kubelet[662]: E0307 19:33:42.415836     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:12.261998  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:52 old-k8s-version-490121 kubelet[662]: E0307 19:33:52.404978     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.262221  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:55 old-k8s-version-490121 kubelet[662]: E0307 19:33:55.404850     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.264387  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:07 old-k8s-version-490121 kubelet[662]: E0307 19:34:07.404060     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.264606  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:09 old-k8s-version-490121 kubelet[662]: E0307 19:34:09.404722     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.265123  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:22 old-k8s-version-490121 kubelet[662]: E0307 19:34:22.325297     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.266214  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:23 old-k8s-version-490121 kubelet[662]: E0307 19:34:23.328654     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.266457  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:23 old-k8s-version-490121 kubelet[662]: E0307 19:34:23.404300     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.266819  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:34 old-k8s-version-490121 kubelet[662]: E0307 19:34:34.405645     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.267028  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:35 old-k8s-version-490121 kubelet[662]: E0307 19:34:35.404699     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.267382  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:47 old-k8s-version-490121 kubelet[662]: E0307 19:34:47.404007     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.267620  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:49 old-k8s-version-490121 kubelet[662]: E0307 19:34:49.404434     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.267992  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:59 old-k8s-version-490121 kubelet[662]: E0307 19:34:59.404258     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.268205  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:02 old-k8s-version-490121 kubelet[662]: E0307 19:35:02.405853     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.268570  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:12 old-k8s-version-490121 kubelet[662]: E0307 19:35:12.405265     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.268783  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:17 old-k8s-version-490121 kubelet[662]: E0307 19:35:17.404385     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.269134  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:27 old-k8s-version-490121 kubelet[662]: E0307 19:35:27.406835     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.269703  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:32 old-k8s-version-490121 kubelet[662]: E0307 19:35:32.407052     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.270066  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:40 old-k8s-version-490121 kubelet[662]: E0307 19:35:40.404052     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.270279  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:45 old-k8s-version-490121 kubelet[662]: E0307 19:35:45.404603     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.270632  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:52 old-k8s-version-490121 kubelet[662]: E0307 19:35:52.404806     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.270841  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:56 old-k8s-version-490121 kubelet[662]: E0307 19:35:56.408243     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.271203  762831 logs.go:138] Found kubelet problem: Mar 07 19:36:05 old-k8s-version-490121 kubelet[662]: E0307 19:36:05.404245     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.271558  762831 logs.go:138] Found kubelet problem: Mar 07 19:36:09 old-k8s-version-490121 kubelet[662]: E0307 19:36:09.404371     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
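The two failures repeated above are steady-state for this run: the metrics-server pod references an image on the host fake.domain, which DNS can never resolve (the lookup against 192.168.85.1:53 fails with "no such host"), so each pull attempt ends in ErrImagePull and then ImagePullBackOff; dashboard-metrics-scraper is meanwhile crash-looping with a back-off that has grown to 2m40s. A minimal sketch for confirming both states from the host; the context and pod names are copied from the log above, and the jsonpath expression is an assumption, not part of the test:

	kubectl --context old-k8s-version-490121 -n kube-system \
	  get pod metrics-server-9975d5f86-w9hn6 \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'  # expect ImagePullBackOff
	kubectl --context old-k8s-version-490121 -n kubernetes-dashboard \
	  get pod dashboard-metrics-scraper-8d5bb5db8-w2dn4 \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'  # expect CrashLoopBackOff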
	I0307 19:36:12.271580  762831 logs.go:123] Gathering logs for etcd [05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190] ...
	I0307 19:36:12.271605  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190"
	I0307 19:36:12.320083  762831 logs.go:123] Gathering logs for kube-scheduler [156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4] ...
	I0307 19:36:12.320115  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4"
	I0307 19:36:12.372179  762831 logs.go:123] Gathering logs for kindnet [1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29] ...
	I0307 19:36:12.372386  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29"
	I0307 19:36:12.424384  762831 logs.go:123] Gathering logs for storage-provisioner [ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa] ...
	I0307 19:36:12.424413  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa"
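Each "Gathering logs for ..." step above shells into the node and tails the named container through crictl. A hedged hand-run equivalent using the etcd container ID from the log above (the profile name is inferred from the node name old-k8s-version-490121):

	out/minikube-linux-arm64 -p old-k8s-version-490121 ssh -- \
	  sudo /usr/bin/crictl logs --tail 400 05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190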
	I0307 19:36:12.473228  762831 out.go:304] Setting ErrFile to fd 2...
	I0307 19:36:12.473254  762831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 19:36:12.473310  762831 out.go:239] X Problems detected in kubelet:
	W0307 19:36:12.473329  762831 out.go:239]   Mar 07 19:35:45 old-k8s-version-490121 kubelet[662]: E0307 19:35:45.404603     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.473339  762831 out.go:239]   Mar 07 19:35:52 old-k8s-version-490121 kubelet[662]: E0307 19:35:52.404806     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.473348  762831 out.go:239]   Mar 07 19:35:56 old-k8s-version-490121 kubelet[662]: E0307 19:35:56.408243     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.473363  762831 out.go:239]   Mar 07 19:36:05 old-k8s-version-490121 kubelet[662]: E0307 19:36:05.404245     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.473369  762831 out.go:239]   Mar 07 19:36:09 old-k8s-version-490121 kubelet[662]: E0307 19:36:09.404371     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0307 19:36:12.473377  762831 out.go:304] Setting ErrFile to fd 2...
	I0307 19:36:12.473382  762831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:36:22.474361  762831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:36:22.494422  762831 api_server.go:72] duration metric: took 5m48.072661998s to wait for apiserver process to appear ...
	I0307 19:36:22.494445  762831 api_server.go:88] waiting for apiserver healthz status ...
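The healthz wait announced above polls the apiserver until its health endpoint answers "ok". A sketch of the same probe by hand; 8443 is minikube's conventional apiserver port and is an assumption here, as is running the probe through minikube ssh:

	out/minikube-linux-arm64 -p old-k8s-version-490121 ssh -- \
	  curl -sk https://localhost:8443/healthz  # prints "ok" once the apiserver is healthy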
	I0307 19:36:22.494486  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 19:36:22.494542  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 19:36:22.559595  762831 cri.go:89] found id: "ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020"
	I0307 19:36:22.559616  762831 cri.go:89] found id: "99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939"
	I0307 19:36:22.559624  762831 cri.go:89] found id: ""
	I0307 19:36:22.559632  762831 logs.go:276] 2 containers: [ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020 99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939]
	I0307 19:36:22.559686  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.563850  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.568388  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 19:36:22.568464  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 19:36:22.628461  762831 cri.go:89] found id: "5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d"
	I0307 19:36:22.628491  762831 cri.go:89] found id: "05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190"
	I0307 19:36:22.628496  762831 cri.go:89] found id: ""
	I0307 19:36:22.628504  762831 logs.go:276] 2 containers: [5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d 05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190]
	I0307 19:36:22.628558  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.632518  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.636566  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 19:36:22.636648  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 19:36:22.716216  762831 cri.go:89] found id: "9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b"
	I0307 19:36:22.716236  762831 cri.go:89] found id: "f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384"
	I0307 19:36:22.716241  762831 cri.go:89] found id: ""
	I0307 19:36:22.716248  762831 logs.go:276] 2 containers: [9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384]
	I0307 19:36:22.716303  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.720114  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.726286  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 19:36:22.726389  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 19:36:22.775384  762831 cri.go:89] found id: "62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f"
	I0307 19:36:22.775417  762831 cri.go:89] found id: "156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4"
	I0307 19:36:22.775423  762831 cri.go:89] found id: ""
	I0307 19:36:22.775431  762831 logs.go:276] 2 containers: [62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f 156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4]
	I0307 19:36:22.775535  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.779574  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.783105  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 19:36:22.783214  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 19:36:22.834745  762831 cri.go:89] found id: "eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5"
	I0307 19:36:22.834766  762831 cri.go:89] found id: "5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907"
	I0307 19:36:22.834771  762831 cri.go:89] found id: ""
	I0307 19:36:22.834778  762831 logs.go:276] 2 containers: [eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5 5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907]
	I0307 19:36:22.834885  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.838940  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.842643  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 19:36:22.842766  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 19:36:22.885986  762831 cri.go:89] found id: "2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916"
	I0307 19:36:22.886051  762831 cri.go:89] found id: "f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08"
	I0307 19:36:22.886073  762831 cri.go:89] found id: ""
	I0307 19:36:22.886087  762831 logs.go:276] 2 containers: [2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916 f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08]
	I0307 19:36:22.886147  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.889768  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.893174  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 19:36:22.893250  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 19:36:22.932715  762831 cri.go:89] found id: "18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57"
	I0307 19:36:22.932734  762831 cri.go:89] found id: "1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29"
	I0307 19:36:22.932739  762831 cri.go:89] found id: ""
	I0307 19:36:22.932746  762831 logs.go:276] 2 containers: [18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57 1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29]
	I0307 19:36:22.932801  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.936637  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.940288  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 19:36:22.940389  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 19:36:22.982459  762831 cri.go:89] found id: "ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa"
	I0307 19:36:22.982483  762831 cri.go:89] found id: "f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093"
	I0307 19:36:22.982488  762831 cri.go:89] found id: ""
	I0307 19:36:22.982495  762831 logs.go:276] 2 containers: [ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093]
	I0307 19:36:22.982568  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.986251  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.989380  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0307 19:36:22.989460  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0307 19:36:23.030709  762831 cri.go:89] found id: "90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a"
	I0307 19:36:23.030732  762831 cri.go:89] found id: ""
	I0307 19:36:23.030739  762831 logs.go:276] 1 containers: [90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a]
	I0307 19:36:23.030816  762831 ssh_runner.go:195] Run: which crictl
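The listing pass above repeats one two-step pattern per component: ask crictl for all container IDs matching a name (--quiet prints bare IDs, and two usually appear because "ps -a" also lists the exited pre-restart container), then resolve the crictl path with which before tailing each ID. A hedged sketch combining the steps for a single component, assuming a shell on the node:

	for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
	  sudo "$(which crictl)" logs --tail 400 "$id"
	done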
	I0307 19:36:23.034771  762831 logs.go:123] Gathering logs for coredns [9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b] ...
	I0307 19:36:23.034798  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b"
	I0307 19:36:23.078994  762831 logs.go:123] Gathering logs for kube-scheduler [62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f] ...
	I0307 19:36:23.079022  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f"
	I0307 19:36:23.126348  762831 logs.go:123] Gathering logs for kube-proxy [eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5] ...
	I0307 19:36:23.126376  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5"
	I0307 19:36:23.172310  762831 logs.go:123] Gathering logs for storage-provisioner [f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093] ...
	I0307 19:36:23.172375  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093"
	I0307 19:36:23.212617  762831 logs.go:123] Gathering logs for containerd ...
	I0307 19:36:23.212684  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 19:36:23.282162  762831 logs.go:123] Gathering logs for kubelet ...
	I0307 19:36:23.282199  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
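containerd and the kubelet are systemd services rather than CRI containers, so the collector reads their logs from the journal instead; the "Found kubelet problem" warnings below are scanned out of this same 400-line window. By hand (unit name as above, --no-pager added for non-interactive use):

	out/minikube-linux-arm64 -p old-k8s-version-490121 ssh -- \
	  sudo journalctl -u kubelet -n 400 --no-pager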
	W0307 19:36:23.342202  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562011     662 reflector.go:138] object-"kube-system"/"kube-proxy-token-vdtz2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-vdtz2" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.342735  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562119     662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.343078  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562202     662 reflector.go:138] object-"kube-system"/"coredns-token-zckzk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-zckzk" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.343584  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562259     662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.343891  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562331     662 reflector.go:138] object-"kube-system"/"kindnet-token-g7knv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-g7knv" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.344406  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562675     662 reflector.go:138] object-"default"/"default-token-rgss9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-rgss9" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.344748  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562749     662 reflector.go:138] object-"kube-system"/"metrics-server-token-t79rr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t79rr" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.345285  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.672443     662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-k4nsq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-k4nsq" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.353834  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:54 old-k8s-version-490121 kubelet[662]: E0307 19:30:54.291729     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:23.354037  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:54 old-k8s-version-490121 kubelet[662]: E0307 19:30:54.749946     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.356811  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:07 old-k8s-version-490121 kubelet[662]: E0307 19:31:07.413698     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:23.358512  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:20 old-k8s-version-490121 kubelet[662]: E0307 19:31:20.430619     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.359433  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:23 old-k8s-version-490121 kubelet[662]: E0307 19:31:23.898812     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.359765  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:24 old-k8s-version-490121 kubelet[662]: E0307 19:31:24.894861     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.360200  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:25 old-k8s-version-490121 kubelet[662]: E0307 19:31:25.899230     662 pod_workers.go:191] Error syncing pod 57707985-e3ad-463c-a7a9-150bfe271af7 ("storage-provisioner_kube-system(57707985-e3ad-463c-a7a9-150bfe271af7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(57707985-e3ad-463c-a7a9-150bfe271af7)"
	W0307 19:36:23.364160  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:31 old-k8s-version-490121 kubelet[662]: E0307 19:31:31.411639     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:23.364506  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:32 old-k8s-version-490121 kubelet[662]: E0307 19:31:32.839916     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.365283  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:43 old-k8s-version-490121 kubelet[662]: E0307 19:31:43.406752     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.365814  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:43 old-k8s-version-490121 kubelet[662]: E0307 19:31:43.954053     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.366145  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:52 old-k8s-version-490121 kubelet[662]: E0307 19:31:52.840518     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.366334  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:57 old-k8s-version-490121 kubelet[662]: E0307 19:31:57.404922     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.366928  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:08 old-k8s-version-490121 kubelet[662]: E0307 19:32:08.021627     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.367115  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:08 old-k8s-version-490121 kubelet[662]: E0307 19:32:08.404313     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.367441  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:12 old-k8s-version-490121 kubelet[662]: E0307 19:32:12.840494     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.369872  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:19 old-k8s-version-490121 kubelet[662]: E0307 19:32:19.412896     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:23.370699  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:25 old-k8s-version-490121 kubelet[662]: E0307 19:32:25.404097     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.370901  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:34 old-k8s-version-490121 kubelet[662]: E0307 19:32:34.404539     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.371238  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:38 old-k8s-version-490121 kubelet[662]: E0307 19:32:38.404439     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.371423  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:49 old-k8s-version-490121 kubelet[662]: E0307 19:32:49.404401     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.372010  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:51 old-k8s-version-490121 kubelet[662]: E0307 19:32:51.130308     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.372334  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:52 old-k8s-version-490121 kubelet[662]: E0307 19:32:52.840092     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.372514  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:02 old-k8s-version-490121 kubelet[662]: E0307 19:33:02.404637     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.372838  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:05 old-k8s-version-490121 kubelet[662]: E0307 19:33:05.404490     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.373020  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:14 old-k8s-version-490121 kubelet[662]: E0307 19:33:14.404856     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.373343  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:17 old-k8s-version-490121 kubelet[662]: E0307 19:33:17.403976     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.373536  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:27 old-k8s-version-490121 kubelet[662]: E0307 19:33:27.404480     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.373887  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:30 old-k8s-version-490121 kubelet[662]: E0307 19:33:30.404449     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.374213  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:41 old-k8s-version-490121 kubelet[662]: E0307 19:33:41.404057     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.376719  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:42 old-k8s-version-490121 kubelet[662]: E0307 19:33:42.415836     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:23.377054  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:52 old-k8s-version-490121 kubelet[662]: E0307 19:33:52.404978     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.377238  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:55 old-k8s-version-490121 kubelet[662]: E0307 19:33:55.404850     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.377572  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:07 old-k8s-version-490121 kubelet[662]: E0307 19:34:07.404060     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.377754  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:09 old-k8s-version-490121 kubelet[662]: E0307 19:34:09.404722     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.378206  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:22 old-k8s-version-490121 kubelet[662]: E0307 19:34:22.325297     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.378658  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:23 old-k8s-version-490121 kubelet[662]: E0307 19:34:23.328654     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.378843  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:23 old-k8s-version-490121 kubelet[662]: E0307 19:34:23.404300     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.379171  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:34 old-k8s-version-490121 kubelet[662]: E0307 19:34:34.405645     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.379355  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:35 old-k8s-version-490121 kubelet[662]: E0307 19:34:35.404699     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.379679  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:47 old-k8s-version-490121 kubelet[662]: E0307 19:34:47.404007     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.379860  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:49 old-k8s-version-490121 kubelet[662]: E0307 19:34:49.404434     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.380186  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:59 old-k8s-version-490121 kubelet[662]: E0307 19:34:59.404258     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.380367  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:02 old-k8s-version-490121 kubelet[662]: E0307 19:35:02.405853     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.380716  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:12 old-k8s-version-490121 kubelet[662]: E0307 19:35:12.405265     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.380901  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:17 old-k8s-version-490121 kubelet[662]: E0307 19:35:17.404385     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.381224  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:27 old-k8s-version-490121 kubelet[662]: E0307 19:35:27.406835     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.381405  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:32 old-k8s-version-490121 kubelet[662]: E0307 19:35:32.407052     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.381735  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:40 old-k8s-version-490121 kubelet[662]: E0307 19:35:40.404052     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.382360  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:45 old-k8s-version-490121 kubelet[662]: E0307 19:35:45.404603     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.382702  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:52 old-k8s-version-490121 kubelet[662]: E0307 19:35:52.404806     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.382887  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:56 old-k8s-version-490121 kubelet[662]: E0307 19:35:56.408243     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.383217  762831 logs.go:138] Found kubelet problem: Mar 07 19:36:05 old-k8s-version-490121 kubelet[662]: E0307 19:36:05.404245     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.383399  762831 logs.go:138] Found kubelet problem: Mar 07 19:36:09 old-k8s-version-490121 kubelet[662]: E0307 19:36:09.404371     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.383723  762831 logs.go:138] Found kubelet problem: Mar 07 19:36:19 old-k8s-version-490121 kubelet[662]: E0307 19:36:19.404863     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.383904  762831 logs.go:138] Found kubelet problem: Mar 07 19:36:20 old-k8s-version-490121 kubelet[662]: E0307 19:36:20.420688     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0307 19:36:23.383915  762831 logs.go:123] Gathering logs for kube-apiserver [ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020] ...
	I0307 19:36:23.383933  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020"
	I0307 19:36:23.476177  762831 logs.go:123] Gathering logs for kube-controller-manager [f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08] ...
	I0307 19:36:23.476227  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08"
	I0307 19:36:23.572070  762831 logs.go:123] Gathering logs for storage-provisioner [ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa] ...
	I0307 19:36:23.572107  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa"
	I0307 19:36:23.617823  762831 logs.go:123] Gathering logs for kubernetes-dashboard [90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a] ...
	I0307 19:36:23.617853  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a"
	I0307 19:36:23.696040  762831 logs.go:123] Gathering logs for container status ...
	I0307 19:36:23.696108  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:36:23.762456  762831 logs.go:123] Gathering logs for kube-apiserver [99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939] ...
	I0307 19:36:23.762532  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939"
	I0307 19:36:23.835674  762831 logs.go:123] Gathering logs for etcd [05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190] ...
	I0307 19:36:23.835750  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190"
	I0307 19:36:23.903568  762831 logs.go:123] Gathering logs for coredns [f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384] ...
	I0307 19:36:23.903699  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384"
	I0307 19:36:23.952099  762831 logs.go:123] Gathering logs for kindnet [18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57] ...
	I0307 19:36:23.952172  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57"
	I0307 19:36:24.014266  762831 logs.go:123] Gathering logs for kindnet [1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29] ...
	I0307 19:36:24.014344  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29"
	I0307 19:36:24.149466  762831 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:36:24.149546  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:36:24.364732  762831 logs.go:123] Gathering logs for etcd [5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d] ...
	I0307 19:36:24.364896  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d"
	I0307 19:36:24.422381  762831 logs.go:123] Gathering logs for kube-proxy [5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907] ...
	I0307 19:36:24.422452  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907"
	I0307 19:36:24.472397  762831 logs.go:123] Gathering logs for kube-controller-manager [2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916] ...
	I0307 19:36:24.472461  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916"
	I0307 19:36:24.554174  762831 logs.go:123] Gathering logs for dmesg ...
	I0307 19:36:24.554248  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:36:24.577804  762831 logs.go:123] Gathering logs for kube-scheduler [156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4] ...
	I0307 19:36:24.577881  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4"
	I0307 19:36:24.642561  762831 out.go:304] Setting ErrFile to fd 2...
	I0307 19:36:24.642626  762831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 19:36:24.642701  762831 out.go:239] X Problems detected in kubelet:
	W0307 19:36:24.642745  762831 out.go:239]   Mar 07 19:35:56 old-k8s-version-490121 kubelet[662]: E0307 19:35:56.408243     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:24.642899  762831 out.go:239]   Mar 07 19:36:05 old-k8s-version-490121 kubelet[662]: E0307 19:36:05.404245     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:24.642937  762831 out.go:239]   Mar 07 19:36:09 old-k8s-version-490121 kubelet[662]: E0307 19:36:09.404371     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:24.643025  762831 out.go:239]   Mar 07 19:36:19 old-k8s-version-490121 kubelet[662]: E0307 19:36:19.404863     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:24.643060  762831 out.go:239]   Mar 07 19:36:20 old-k8s-version-490121 kubelet[662]: E0307 19:36:20.420688     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0307 19:36:24.643094  762831 out.go:304] Setting ErrFile to fd 2...
	I0307 19:36:24.643117  762831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:36:34.644283  762831 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0307 19:36:34.654169  762831 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0307 19:36:34.656597  762831 out.go:177] 
	W0307 19:36:34.658535  762831 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0307 19:36:34.658604  762831 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0307 19:36:34.658626  762831 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0307 19:36:34.658632  762831 out.go:239] * 
	W0307 19:36:34.659694  762831 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:36:34.662993  762831 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-490121 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
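The failure above is a version gate rather than an unreachable apiserver: /healthz on 192.168.85.2:8443 returned 200 just before exit, but the control plane never reported the expected v1.20.0 within the 6m0s wait, so the run exits with status 102 (K8S_UNHEALTHY_CONTROL_PLANE). The repeated metrics-server ImagePullBackOff entries in the kubelet log are expected noise for this test, which deliberately points the addon at the unreachable registry fake.domain. A minimal recovery sketch following the log's own suggestion (binary, profile, and flags copied from the failing invocation above; the kvm-* options are omitted here since the run uses --driver=docker, where they do not apply):

	# Tear down all profiles and cached state, per the suggestion in the log
	out/minikube-linux-arm64 delete --all --purge
	# Retry the start that exited with status 102
	out/minikube-linux-arm64 start -p old-k8s-version-490121 --memory=2200 \
	  --alsologtostderr --wait=true --disable-driver-mounts --keep-context=false \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
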
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-490121
helpers_test.go:235: (dbg) docker inspect old-k8s-version-490121:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "88e4e15b19a2d24455aa9f8d367b5d28f54f7e1bcdb4dfde532231a256de22c0",
	        "Created": "2024-03-07T19:27:15.489759719Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 763020,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-07T19:30:26.830428213Z",
	            "FinishedAt": "2024-03-07T19:30:25.780938358Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/88e4e15b19a2d24455aa9f8d367b5d28f54f7e1bcdb4dfde532231a256de22c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/88e4e15b19a2d24455aa9f8d367b5d28f54f7e1bcdb4dfde532231a256de22c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/88e4e15b19a2d24455aa9f8d367b5d28f54f7e1bcdb4dfde532231a256de22c0/hosts",
	        "LogPath": "/var/lib/docker/containers/88e4e15b19a2d24455aa9f8d367b5d28f54f7e1bcdb4dfde532231a256de22c0/88e4e15b19a2d24455aa9f8d367b5d28f54f7e1bcdb4dfde532231a256de22c0-json.log",
	        "Name": "/old-k8s-version-490121",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-490121:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-490121",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/abbc2bc05533b6ec1f7375adc27f3375613677eb4ca0d72ab824571d558d2fad-init/diff:/var/lib/docker/overlay2/0f2c2bc9ebcb6a090c4ed5f3df98eb2fb852fa3a78be98cc34cd75b1870e6d76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/abbc2bc05533b6ec1f7375adc27f3375613677eb4ca0d72ab824571d558d2fad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/abbc2bc05533b6ec1f7375adc27f3375613677eb4ca0d72ab824571d558d2fad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/abbc2bc05533b6ec1f7375adc27f3375613677eb4ca0d72ab824571d558d2fad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-490121",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-490121/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-490121",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-490121",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-490121",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "48c445440d40d4683a57daf47cc0316fa469d8436433a390600e88774af70963",
	            "SandboxKey": "/var/run/docker/netns/48c445440d40",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33807"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33804"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33805"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-490121": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "88e4e15b19a2",
	                        "old-k8s-version-490121"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "83c5f395470026aef4ea22efddb9b88956697e7a6581a4d5a36fac9ca0f95270",
	                    "EndpointID": "0c6f32f6f314215ee283975bb256db2fd6c1a8398e27a9dd18effa68d75b764f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-490121",
	                        "88e4e15b19a2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
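For triage, the handful of fields in the inspect dump that matter here (container state, the published host ports, and the profile network's IPv4 address) can be read directly with Go templates instead of scanning the full JSON. A small sketch, assuming the stock docker CLI on the test host and the container name shown above:

	# Container state and init PID ("running", 763020 above)
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' old-k8s-version-490121
	# Host port published for each container port (e.g. 8443/tcp -> 33805)
	docker inspect -f '{{range $port, $binds := .NetworkSettings.Ports}}{{$port}} -> {{(index $binds 0).HostPort}}{{"\n"}}{{end}}' old-k8s-version-490121
	# IPv4 address on the profile network (192.168.85.2 above)
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-490121").IPAddress}}' old-k8s-version-490121
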
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-490121 -n old-k8s-version-490121
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-490121 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-490121 logs -n 25: (2.079197574s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-093673                           | force-systemd-flag-093673 | jenkins | v1.32.0 | 07 Mar 24 19:25 UTC | 07 Mar 24 19:26 UTC |
	|         | --memory=2048 --force-systemd                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-093673                              | force-systemd-flag-093673 | jenkins | v1.32.0 | 07 Mar 24 19:26 UTC | 07 Mar 24 19:26 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-093673                           | force-systemd-flag-093673 | jenkins | v1.32.0 | 07 Mar 24 19:26 UTC | 07 Mar 24 19:26 UTC |
	| start   | -p cert-options-533978                                 | cert-options-533978       | jenkins | v1.32.0 | 07 Mar 24 19:26 UTC | 07 Mar 24 19:27 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-533978 ssh                                | cert-options-533978       | jenkins | v1.32.0 | 07 Mar 24 19:27 UTC | 07 Mar 24 19:27 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-533978 -- sudo                         | cert-options-533978       | jenkins | v1.32.0 | 07 Mar 24 19:27 UTC | 07 Mar 24 19:27 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-533978                                 | cert-options-533978       | jenkins | v1.32.0 | 07 Mar 24 19:27 UTC | 07 Mar 24 19:27 UTC |
	| start   | -p old-k8s-version-490121                              | old-k8s-version-490121    | jenkins | v1.32.0 | 07 Mar 24 19:27 UTC | 07 Mar 24 19:30 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-643500                              | cert-expiration-643500    | jenkins | v1.32.0 | 07 Mar 24 19:29 UTC | 07 Mar 24 19:29 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-643500                              | cert-expiration-643500    | jenkins | v1.32.0 | 07 Mar 24 19:29 UTC | 07 Mar 24 19:29 UTC |
	| start   | -p no-preload-028045                                   | no-preload-028045         | jenkins | v1.32.0 | 07 Mar 24 19:29 UTC | 07 Mar 24 19:30 UTC |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-490121        | old-k8s-version-490121    | jenkins | v1.32.0 | 07 Mar 24 19:30 UTC | 07 Mar 24 19:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-490121                              | old-k8s-version-490121    | jenkins | v1.32.0 | 07 Mar 24 19:30 UTC | 07 Mar 24 19:30 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490121             | old-k8s-version-490121    | jenkins | v1.32.0 | 07 Mar 24 19:30 UTC | 07 Mar 24 19:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-490121                              | old-k8s-version-490121    | jenkins | v1.32.0 | 07 Mar 24 19:30 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-028045             | no-preload-028045         | jenkins | v1.32.0 | 07 Mar 24 19:30 UTC | 07 Mar 24 19:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-028045                                   | no-preload-028045         | jenkins | v1.32.0 | 07 Mar 24 19:30 UTC | 07 Mar 24 19:30 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-028045                  | no-preload-028045         | jenkins | v1.32.0 | 07 Mar 24 19:30 UTC | 07 Mar 24 19:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-028045                                   | no-preload-028045         | jenkins | v1.32.0 | 07 Mar 24 19:30 UTC | 07 Mar 24 19:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	| image   | no-preload-028045 image list                           | no-preload-028045         | jenkins | v1.32.0 | 07 Mar 24 19:35 UTC | 07 Mar 24 19:35 UTC |
	|         | --format=json                                          |                           |         |         |                     |                     |
	| pause   | -p no-preload-028045                                   | no-preload-028045         | jenkins | v1.32.0 | 07 Mar 24 19:35 UTC | 07 Mar 24 19:35 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| unpause | -p no-preload-028045                                   | no-preload-028045         | jenkins | v1.32.0 | 07 Mar 24 19:35 UTC | 07 Mar 24 19:35 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| delete  | -p no-preload-028045                                   | no-preload-028045         | jenkins | v1.32.0 | 07 Mar 24 19:35 UTC | 07 Mar 24 19:35 UTC |
	| delete  | -p no-preload-028045                                   | no-preload-028045         | jenkins | v1.32.0 | 07 Mar 24 19:35 UTC | 07 Mar 24 19:35 UTC |
	| start   | -p embed-certs-327564                                  | embed-certs-327564        | jenkins | v1.32.0 | 07 Mar 24 19:35 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 19:35:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 19:35:31.855739  771734 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:35:31.855918  771734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:35:31.855946  771734 out.go:304] Setting ErrFile to fd 2...
	I0307 19:35:31.855970  771734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:35:31.856240  771734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
	I0307 19:35:31.856717  771734 out.go:298] Setting JSON to false
	I0307 19:35:31.857832  771734 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11876,"bootTime":1709828256,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 19:35:31.857934  771734 start.go:139] virtualization:  
	I0307 19:35:31.860533  771734 out.go:177] * [embed-certs-327564] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 19:35:31.863336  771734 out.go:177]   - MINIKUBE_LOCATION=18239
	I0307 19:35:31.865010  771734 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:35:31.863470  771734 notify.go:220] Checking for updates...
	I0307 19:35:31.867149  771734 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	I0307 19:35:31.869263  771734 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	I0307 19:35:31.870993  771734 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 19:35:31.873118  771734 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:35:31.875699  771734 config.go:182] Loaded profile config "old-k8s-version-490121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0307 19:35:31.875866  771734 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:35:31.897777  771734 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 19:35:31.897902  771734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 19:35:31.963800  771734 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 19:35:31.953865224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 19:35:31.963914  771734 docker.go:295] overlay module found
	I0307 19:35:31.966513  771734 out.go:177] * Using the docker driver based on user configuration
	I0307 19:35:31.968866  771734 start.go:297] selected driver: docker
	I0307 19:35:31.968884  771734 start.go:901] validating driver "docker" against <nil>
	I0307 19:35:31.968904  771734 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:35:31.969693  771734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 19:35:32.031093  771734 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 19:35:32.021090322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 19:35:32.031266  771734 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:35:32.031498  771734 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:35:32.033678  771734 out.go:177] * Using Docker driver with root privileges
	I0307 19:35:32.035709  771734 cni.go:84] Creating CNI manager for ""
	I0307 19:35:32.035733  771734 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 19:35:32.035743  771734 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 19:35:32.035847  771734 start.go:340] cluster config:
	{Name:embed-certs-327564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-327564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:35:32.040597  771734 out.go:177] * Starting "embed-certs-327564" primary control-plane node in "embed-certs-327564" cluster
	I0307 19:35:32.043149  771734 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 19:35:32.045428  771734 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0307 19:35:32.047530  771734 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 19:35:32.047606  771734 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 19:35:32.047643  771734 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0307 19:35:32.047655  771734 cache.go:56] Caching tarball of preloaded images
	I0307 19:35:32.047750  771734 preload.go:173] Found /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:35:32.047765  771734 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0307 19:35:32.047869  771734 profile.go:142] Saving config to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/config.json ...
	I0307 19:35:32.047890  771734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/config.json: {Name:mkbcd98b08a982b8279658fceb8e97a6b892072f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:35:32.064555  771734 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0307 19:35:32.064586  771734 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0307 19:35:32.064607  771734 cache.go:194] Successfully downloaded all kic artifacts
	I0307 19:35:32.064636  771734 start.go:360] acquireMachinesLock for embed-certs-327564: {Name:mk4731e9259edf2aeab2afb8a48ec2b1d2c396bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:35:32.065206  771734 start.go:364] duration metric: took 545.819µs to acquireMachinesLock for "embed-certs-327564"
	I0307 19:35:32.065246  771734 start.go:93] Provisioning new machine with config: &{Name:embed-certs-327564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-327564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0307 19:35:32.065345  771734 start.go:125] createHost starting for "" (driver="docker")
	I0307 19:35:31.606730  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:33.656753  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:36.129051  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:32.068491  771734 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0307 19:35:32.068751  771734 start.go:159] libmachine.API.Create for "embed-certs-327564" (driver="docker")
	I0307 19:35:32.068789  771734 client.go:168] LocalClient.Create starting
	I0307 19:35:32.068873  771734 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem
	I0307 19:35:32.068912  771734 main.go:141] libmachine: Decoding PEM data...
	I0307 19:35:32.068930  771734 main.go:141] libmachine: Parsing certificate...
	I0307 19:35:32.068987  771734 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18239-558171/.minikube/certs/cert.pem
	I0307 19:35:32.069017  771734 main.go:141] libmachine: Decoding PEM data...
	I0307 19:35:32.069027  771734 main.go:141] libmachine: Parsing certificate...
	I0307 19:35:32.069408  771734 cli_runner.go:164] Run: docker network inspect embed-certs-327564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 19:35:32.085099  771734 cli_runner.go:211] docker network inspect embed-certs-327564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 19:35:32.085192  771734 network_create.go:281] running [docker network inspect embed-certs-327564] to gather additional debugging logs...
	I0307 19:35:32.085214  771734 cli_runner.go:164] Run: docker network inspect embed-certs-327564
	W0307 19:35:32.100716  771734 cli_runner.go:211] docker network inspect embed-certs-327564 returned with exit code 1
	I0307 19:35:32.100748  771734 network_create.go:284] error running [docker network inspect embed-certs-327564]: docker network inspect embed-certs-327564: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-327564 not found
	I0307 19:35:32.100760  771734 network_create.go:286] output of [docker network inspect embed-certs-327564]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-327564 not found
	
	** /stderr **
	I0307 19:35:32.100880  771734 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 19:35:32.122953  771734 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-182dc02cf5ca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0e:2f:64:c2} reservation:<nil>}
	I0307 19:35:32.123432  771734 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2f61e5e37e0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ec:4e:01:50} reservation:<nil>}
	I0307 19:35:32.123887  771734 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3ac39cac33a3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:37:c5:77:12} reservation:<nil>}
	I0307 19:35:32.124432  771734 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025b9180}
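
The four network.go lines above show the free-subnet scan: candidate /24 blocks start at 192.168.49.0 and the third octet advances in steps of 9 (49, 58, 67, 76, ...) until a block no existing bridge occupies is found. A minimal Go sketch of that walk; the `taken` map is a hypothetical stand-in for minikube's actual host-interface inspection:

    package main

    import (
    	"fmt"
    	"net"
    )

    // taken simulates the subnets already claimed by docker bridges
    // (the three "skipping subnet ... that is taken" lines above).
    var taken = map[string]bool{
    	"192.168.49.0/24": true,
    	"192.168.58.0/24": true,
    	"192.168.67.0/24": true,
    }

    func main() {
    	// Third octet advances by 9 per attempt, mirroring the log order.
    	for octet := 49; octet <= 255; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if taken[cidr] {
    			fmt.Println("skipping subnet", cidr, "that is taken")
    			continue
    		}
    		_, ipnet, err := net.ParseCIDR(cidr)
    		if err != nil {
    			panic(err)
    		}
    		fmt.Println("using free private subnet", ipnet) // 192.168.76.0/24 here
    		return
    	}
    }
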
	I0307 19:35:32.124461  771734 network_create.go:124] attempt to create docker network embed-certs-327564 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0307 19:35:32.124520  771734 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-327564 embed-certs-327564
	I0307 19:35:32.186227  771734 network_create.go:108] docker network embed-certs-327564 192.168.76.0/24 created
	I0307 19:35:32.186268  771734 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-327564" container
	I0307 19:35:32.186363  771734 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 19:35:32.202409  771734 cli_runner.go:164] Run: docker volume create embed-certs-327564 --label name.minikube.sigs.k8s.io=embed-certs-327564 --label created_by.minikube.sigs.k8s.io=true
	I0307 19:35:32.218608  771734 oci.go:103] Successfully created a docker volume embed-certs-327564
	I0307 19:35:32.218688  771734 cli_runner.go:164] Run: docker run --rm --name embed-certs-327564-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-327564 --entrypoint /usr/bin/test -v embed-certs-327564:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0307 19:35:32.911513  771734 oci.go:107] Successfully prepared a docker volume embed-certs-327564
	I0307 19:35:32.911596  771734 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 19:35:32.911616  771734 kic.go:194] Starting extracting preloaded images to volume ...
	I0307 19:35:32.911716  771734 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-327564:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 19:35:38.606994  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:41.105989  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:38.541711  771734 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-327564:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (5.629954813s)
	I0307 19:35:38.541743  771734 kic.go:203] duration metric: took 5.63012328s to extract preloaded images to volume ...
	W0307 19:35:38.541891  771734 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0307 19:35:38.542003  771734 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0307 19:35:38.596251  771734 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-327564 --name embed-certs-327564 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-327564 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-327564 --network embed-certs-327564 --ip 192.168.76.2 --volume embed-certs-327564:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0307 19:35:38.931797  771734 cli_runner.go:164] Run: docker container inspect embed-certs-327564 --format={{.State.Running}}
	I0307 19:35:38.965168  771734 cli_runner.go:164] Run: docker container inspect embed-certs-327564 --format={{.State.Status}}
	I0307 19:35:38.985759  771734 cli_runner.go:164] Run: docker exec embed-certs-327564 stat /var/lib/dpkg/alternatives/iptables
	I0307 19:35:39.055098  771734 oci.go:144] the created container "embed-certs-327564" has a running status.
	I0307 19:35:39.055136  771734 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18239-558171/.minikube/machines/embed-certs-327564/id_rsa...
	I0307 19:35:39.459163  771734 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18239-558171/.minikube/machines/embed-certs-327564/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0307 19:35:39.488783  771734 cli_runner.go:164] Run: docker container inspect embed-certs-327564 --format={{.State.Status}}
	I0307 19:35:39.521171  771734 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0307 19:35:39.521190  771734 kic_runner.go:114] Args: [docker exec --privileged embed-certs-327564 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0307 19:35:39.600041  771734 cli_runner.go:164] Run: docker container inspect embed-certs-327564 --format={{.State.Status}}
	I0307 19:35:39.628432  771734 machine.go:94] provisionDockerMachine start ...
	I0307 19:35:39.628518  771734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327564
	I0307 19:35:39.657856  771734 main.go:141] libmachine: Using SSH client type: native
	I0307 19:35:39.658142  771734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I0307 19:35:39.658158  771734 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 19:35:39.658955  771734 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59958->127.0.0.1:33818: read: connection reset by peer
	I0307 19:35:42.792994  771734 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-327564
	
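The "Error dialing TCP ... connection reset by peer" line above is expected on first contact: sshd inside the freshly started container is not yet accepting connections, and the provisioner retries until the handshake succeeds (here at 19:35:42). A rough retry-loop sketch using golang.org/x/crypto/ssh; the key path and port come from the log, but the retry policy below is an illustrative assumption, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18239-558171/.minikube/machines/embed-certs-327564/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test cluster
    		Timeout:         5 * time.Second,
    	}
    	// Retry: sshd in the new container needs a moment to come up.
    	for attempt := 1; ; attempt++ {
    		client, err := ssh.Dial("tcp", "127.0.0.1:33818", cfg)
    		if err == nil {
    			defer client.Close()
    			fmt.Println("connected after", attempt, "attempt(s)")
    			return
    		}
    		if attempt == 10 {
    			panic(err)
    		}
    		time.Sleep(time.Second)
    	}
    }
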
	I0307 19:35:42.793020  771734 ubuntu.go:169] provisioning hostname "embed-certs-327564"
	I0307 19:35:42.793096  771734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327564
	I0307 19:35:42.812606  771734 main.go:141] libmachine: Using SSH client type: native
	I0307 19:35:42.812858  771734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I0307 19:35:42.812884  771734 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-327564 && echo "embed-certs-327564" | sudo tee /etc/hostname
	I0307 19:35:42.958405  771734 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-327564
	
	I0307 19:35:42.958483  771734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327564
	I0307 19:35:42.976423  771734 main.go:141] libmachine: Using SSH client type: native
	I0307 19:35:42.976669  771734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I0307 19:35:42.976686  771734 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-327564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-327564/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-327564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 19:35:43.108420  771734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 19:35:43.108502  771734 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18239-558171/.minikube CaCertPath:/home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18239-558171/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18239-558171/.minikube}
	I0307 19:35:43.108561  771734 ubuntu.go:177] setting up certificates
	I0307 19:35:43.108590  771734 provision.go:84] configureAuth start
	I0307 19:35:43.108705  771734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-327564
	I0307 19:35:43.124220  771734 provision.go:143] copyHostCerts
	I0307 19:35:43.124285  771734 exec_runner.go:144] found /home/jenkins/minikube-integration/18239-558171/.minikube/ca.pem, removing ...
	I0307 19:35:43.124294  771734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18239-558171/.minikube/ca.pem
	I0307 19:35:43.124376  771734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18239-558171/.minikube/ca.pem (1082 bytes)
	I0307 19:35:43.124462  771734 exec_runner.go:144] found /home/jenkins/minikube-integration/18239-558171/.minikube/cert.pem, removing ...
	I0307 19:35:43.124467  771734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18239-558171/.minikube/cert.pem
	I0307 19:35:43.124497  771734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18239-558171/.minikube/cert.pem (1123 bytes)
	I0307 19:35:43.124546  771734 exec_runner.go:144] found /home/jenkins/minikube-integration/18239-558171/.minikube/key.pem, removing ...
	I0307 19:35:43.124551  771734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18239-558171/.minikube/key.pem
	I0307 19:35:43.124572  771734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18239-558171/.minikube/key.pem (1675 bytes)
	I0307 19:35:43.124616  771734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18239-558171/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca-key.pem org=jenkins.embed-certs-327564 san=[127.0.0.1 192.168.76.2 embed-certs-327564 localhost minikube]
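
The provision.go line above generates the machine's server certificate with the listed SANs (127.0.0.1, 192.168.76.2, the hostname, localhost, minikube). A compact crypto/x509 sketch of SAN-bearing certificate creation; it self-signs for brevity, whereas minikube signs against the ca.pem/ca-key.pem pair shown in the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-327564"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as listed in the provision.go line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    		DNSNames:    []string{"embed-certs-327564", "localhost", "minikube"},
    	}
    	// Self-signed for brevity: the template doubles as its own parent.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
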
	I0307 19:35:43.278562  771734 provision.go:177] copyRemoteCerts
	I0307 19:35:43.278666  771734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 19:35:43.278738  771734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327564
	I0307 19:35:43.294583  771734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/embed-certs-327564/id_rsa Username:docker}
	I0307 19:35:43.390579  771734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 19:35:43.416544  771734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 19:35:43.441719  771734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0307 19:35:43.467195  771734 provision.go:87] duration metric: took 358.566729ms to configureAuth
	I0307 19:35:43.467224  771734 ubuntu.go:193] setting minikube options for container-runtime
	I0307 19:35:43.467410  771734 config.go:182] Loaded profile config "embed-certs-327564": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 19:35:43.467424  771734 machine.go:97] duration metric: took 3.838975151s to provisionDockerMachine
	I0307 19:35:43.467431  771734 client.go:171] duration metric: took 11.398632348s to LocalClient.Create
	I0307 19:35:43.467445  771734 start.go:167] duration metric: took 11.398696496s to libmachine.API.Create "embed-certs-327564"
	I0307 19:35:43.467452  771734 start.go:293] postStartSetup for "embed-certs-327564" (driver="docker")
	I0307 19:35:43.467462  771734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 19:35:43.467518  771734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 19:35:43.467573  771734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327564
	I0307 19:35:43.483356  771734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/embed-certs-327564/id_rsa Username:docker}
	I0307 19:35:43.582641  771734 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 19:35:43.586218  771734 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0307 19:35:43.586257  771734 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0307 19:35:43.586268  771734 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0307 19:35:43.586275  771734 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0307 19:35:43.586286  771734 filesync.go:126] Scanning /home/jenkins/minikube-integration/18239-558171/.minikube/addons for local assets ...
	I0307 19:35:43.586351  771734 filesync.go:126] Scanning /home/jenkins/minikube-integration/18239-558171/.minikube/files for local assets ...
	I0307 19:35:43.586435  771734 filesync.go:149] local asset: /home/jenkins/minikube-integration/18239-558171/.minikube/files/etc/ssl/certs/5635812.pem -> 5635812.pem in /etc/ssl/certs
	I0307 19:35:43.586553  771734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 19:35:43.595355  771734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/files/etc/ssl/certs/5635812.pem --> /etc/ssl/certs/5635812.pem (1708 bytes)
	I0307 19:35:43.625672  771734 start.go:296] duration metric: took 158.205709ms for postStartSetup
	I0307 19:35:43.626063  771734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-327564
	I0307 19:35:43.641961  771734 profile.go:142] Saving config to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/config.json ...
	I0307 19:35:43.642249  771734 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 19:35:43.642319  771734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327564
	I0307 19:35:43.674467  771734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/embed-certs-327564/id_rsa Username:docker}
	I0307 19:35:43.766317  771734 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 19:35:43.771235  771734 start.go:128] duration metric: took 11.705875012s to createHost
	I0307 19:35:43.771257  771734 start.go:83] releasing machines lock for "embed-certs-327564", held for 11.706031745s
	I0307 19:35:43.771331  771734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-327564
	I0307 19:35:43.787492  771734 ssh_runner.go:195] Run: cat /version.json
	I0307 19:35:43.787543  771734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 19:35:43.787553  771734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327564
	I0307 19:35:43.787606  771734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327564
	I0307 19:35:43.817766  771734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/embed-certs-327564/id_rsa Username:docker}
	I0307 19:35:43.827216  771734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/embed-certs-327564/id_rsa Username:docker}
	I0307 19:35:43.916988  771734 ssh_runner.go:195] Run: systemctl --version
	I0307 19:35:44.036940  771734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 19:35:44.041298  771734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0307 19:35:44.069506  771734 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0307 19:35:44.069710  771734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 19:35:44.101236  771734 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0307 19:35:44.101261  771734 start.go:494] detecting cgroup driver to use...
	I0307 19:35:44.101294  771734 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0307 19:35:44.101347  771734 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 19:35:44.117392  771734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 19:35:44.130342  771734 docker.go:217] disabling cri-docker service (if available) ...
	I0307 19:35:44.130425  771734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0307 19:35:44.144765  771734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0307 19:35:44.159739  771734 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0307 19:35:44.254737  771734 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0307 19:35:44.361182  771734 docker.go:233] disabling docker service ...
	I0307 19:35:44.361266  771734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0307 19:35:44.387621  771734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0307 19:35:44.410439  771734 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0307 19:35:44.530474  771734 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0307 19:35:44.627638  771734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0307 19:35:44.639002  771734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 19:35:44.655805  771734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 19:35:44.666038  771734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 19:35:44.676278  771734 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 19:35:44.676356  771734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 19:35:44.687391  771734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 19:35:44.696975  771734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 19:35:44.708461  771734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 19:35:44.718031  771734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 19:35:44.726924  771734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 19:35:44.736667  771734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 19:35:44.745319  771734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 19:35:44.754586  771734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:35:44.849367  771734 ssh_runner.go:195] Run: sudo systemctl restart containerd
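
The sed runs above rewrite /etc/containerd/config.toml so containerd matches the "cgroupfs" driver detected on the host (SystemdCgroup = false), pin the pause image, and point the CNI conf_dir at /etc/cni/net.d, before daemon-reload and restart pick the changes up. The core edit, expressed as a small Go sketch with regexp under the assumption that the file already contains a SystemdCgroup line for the seds to target:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	path := "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Mirror of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    }
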
	I0307 19:35:44.996466  771734 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0307 19:35:44.996610  771734 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0307 19:35:45.000581  771734 start.go:562] Will wait 60s for crictl version
	I0307 19:35:45.000709  771734 ssh_runner.go:195] Run: which crictl
	I0307 19:35:45.005814  771734 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 19:35:45.085227  771734 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0307 19:35:45.085390  771734 ssh_runner.go:195] Run: containerd --version
	I0307 19:35:45.129159  771734 ssh_runner.go:195] Run: containerd --version
	I0307 19:35:45.164974  771734 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
	I0307 19:35:43.106627  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:45.115823  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:45.167376  771734 cli_runner.go:164] Run: docker network inspect embed-certs-327564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 19:35:45.188092  771734 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0307 19:35:45.193236  771734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
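
The bash one-liner above is an idempotent /etc/hosts update: grep -v strips any stale line ending in "\thost.minikube.internal", echo appends the fresh mapping, and the result is copied back with sudo. A hypothetical native-Go equivalent of the same transformation (minikube really runs the bash shown):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.76.1\thost.minikube.internal" // mapping from the log
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// grep -v $'\thost.minikube.internal$' equivalent: drop stale entries.
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)
    	// minikube writes this to /tmp/h.$$ and copies it back with sudo cp.
    	fmt.Print(strings.Join(kept, "\n") + "\n")
    }
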
	I0307 19:35:45.219469  771734 kubeadm.go:877] updating cluster {Name:embed-certs-327564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-327564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 19:35:45.219636  771734 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 19:35:45.219712  771734 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 19:35:45.281810  771734 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 19:35:45.281841  771734 containerd.go:519] Images already preloaded, skipping extraction
	I0307 19:35:45.281911  771734 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 19:35:45.329888  771734 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 19:35:45.329914  771734 cache_images.go:84] Images are preloaded, skipping loading
	I0307 19:35:45.329922  771734 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.28.4 containerd true true} ...
	I0307 19:35:45.330073  771734 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-327564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-327564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 19:35:45.330166  771734 ssh_runner.go:195] Run: sudo crictl info
	I0307 19:35:45.376804  771734 cni.go:84] Creating CNI manager for ""
	I0307 19:35:45.376831  771734 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 19:35:45.376842  771734 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 19:35:45.376887  771734 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-327564 NodeName:embed-certs-327564 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 19:35:45.377042  771734 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-327564"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 19:35:45.377132  771734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 19:35:45.386893  771734 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 19:35:45.387009  771734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 19:35:45.396611  771734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0307 19:35:45.418050  771734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 19:35:45.437290  771734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0307 19:35:45.457239  771734 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0307 19:35:45.460855  771734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 19:35:45.473274  771734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:35:45.575696  771734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 19:35:45.599165  771734 certs.go:68] Setting up /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564 for IP: 192.168.76.2
	I0307 19:35:45.599207  771734 certs.go:194] generating shared ca certs ...
	I0307 19:35:45.599225  771734 certs.go:226] acquiring lock for ca certs: {Name:mke14792b1616e9503645c7147aed38043ea5d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:35:45.599390  771734 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18239-558171/.minikube/ca.key
	I0307 19:35:45.599438  771734 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18239-558171/.minikube/proxy-client-ca.key
	I0307 19:35:45.599450  771734 certs.go:256] generating profile certs ...
	I0307 19:35:45.599513  771734 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/client.key
	I0307 19:35:45.599534  771734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/client.crt with IP's: []
	I0307 19:35:46.980480  771734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/client.crt ...
	I0307 19:35:46.980509  771734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/client.crt: {Name:mk774ef848685819579493f7e6b8079e43b50e88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:35:46.981231  771734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/client.key ...
	I0307 19:35:46.981248  771734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/client.key: {Name:mk4dd5f6d1905a200873b0100af2d813ae9c8774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:35:46.981702  771734 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/apiserver.key.2c89eb66
	I0307 19:35:46.981723  771734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/apiserver.crt.2c89eb66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0307 19:35:47.491417  771734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/apiserver.crt.2c89eb66 ...
	I0307 19:35:47.491453  771734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/apiserver.crt.2c89eb66: {Name:mk92b131c7677506750fab443c306129e18ffab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:35:47.492216  771734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/apiserver.key.2c89eb66 ...
	I0307 19:35:47.492244  771734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/apiserver.key.2c89eb66: {Name:mk7649ad94abeadd4e18f36496675feeb06a1f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:35:47.492917  771734 certs.go:381] copying /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/apiserver.crt.2c89eb66 -> /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/apiserver.crt
	I0307 19:35:47.493051  771734 certs.go:385] copying /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/apiserver.key.2c89eb66 -> /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/apiserver.key
	I0307 19:35:47.493125  771734 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/proxy-client.key
	I0307 19:35:47.493152  771734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/proxy-client.crt with IP's: []
	I0307 19:35:48.171680  771734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/proxy-client.crt ...
	I0307 19:35:48.171712  771734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/proxy-client.crt: {Name:mke81cc67d6bc4c483659945a3c4feeb4c204416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:35:48.171908  771734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/proxy-client.key ...
	I0307 19:35:48.171925  771734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/proxy-client.key: {Name:mk4d217cc90e807798441a74769a6c7aa5f0c6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:35:48.172118  771734 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/563581.pem (1338 bytes)
	W0307 19:35:48.172164  771734 certs.go:480] ignoring /home/jenkins/minikube-integration/18239-558171/.minikube/certs/563581_empty.pem, impossibly tiny 0 bytes
	I0307 19:35:48.172178  771734 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 19:35:48.172203  771734 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/ca.pem (1082 bytes)
	I0307 19:35:48.172232  771734 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/cert.pem (1123 bytes)
	I0307 19:35:48.172258  771734 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/certs/key.pem (1675 bytes)
	I0307 19:35:48.172304  771734 certs.go:484] found cert: /home/jenkins/minikube-integration/18239-558171/.minikube/files/etc/ssl/certs/5635812.pem (1708 bytes)
	I0307 19:35:48.172943  771734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 19:35:48.198072  771734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 19:35:48.222711  771734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 19:35:48.252898  771734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 19:35:48.277286  771734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0307 19:35:48.300988  771734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 19:35:48.325006  771734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 19:35:48.348727  771734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/embed-certs-327564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0307 19:35:48.373322  771734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/certs/563581.pem --> /usr/share/ca-certificates/563581.pem (1338 bytes)
	I0307 19:35:48.398646  771734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/files/etc/ssl/certs/5635812.pem --> /usr/share/ca-certificates/5635812.pem (1708 bytes)
	I0307 19:35:48.425615  771734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18239-558171/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 19:35:48.450725  771734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 19:35:48.469827  771734 ssh_runner.go:195] Run: openssl version
	I0307 19:35:48.477095  771734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5635812.pem && ln -fs /usr/share/ca-certificates/5635812.pem /etc/ssl/certs/5635812.pem"
	I0307 19:35:48.487390  771734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5635812.pem
	I0307 19:35:48.490907  771734 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 18:50 /usr/share/ca-certificates/5635812.pem
	I0307 19:35:48.490972  771734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5635812.pem
	I0307 19:35:48.498293  771734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5635812.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 19:35:48.507802  771734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 19:35:48.517690  771734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:35:48.521700  771734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:35:48.521805  771734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:35:48.528689  771734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 19:35:48.538402  771734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/563581.pem && ln -fs /usr/share/ca-certificates/563581.pem /etc/ssl/certs/563581.pem"
	I0307 19:35:48.547792  771734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/563581.pem
	I0307 19:35:48.551414  771734 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 18:50 /usr/share/ca-certificates/563581.pem
	I0307 19:35:48.551482  771734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/563581.pem
	I0307 19:35:48.558671  771734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/563581.pem /etc/ssl/certs/51391683.0"
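
The three-step pattern repeated above (place the PEM under /usr/share/ca-certificates, run openssl x509 -hash -noout against it, then symlink /etc/ssl/certs/<hash>.0 to it) is how OpenSSL-style trust stores index CA certificates by subject-name hash. A hypothetical Go reproduction of one round, shelling out to openssl the way the ssh_runner lines do:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/563581.pem" // path from the log above
    	// openssl prints the subject-name hash, e.g. "51391683".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	// Trust-store lookups expect <hash>.0 (the suffix increments on collisions).
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
    		panic(err)
    	}
    	fmt.Println(link, "->", cert)
    }
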
	I0307 19:35:48.568475  771734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 19:35:48.571930  771734 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0307 19:35:48.571981  771734 kubeadm.go:391] StartCluster: {Name:embed-certs-327564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-327564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:35:48.572065  771734 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0307 19:35:48.572122  771734 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0307 19:35:48.634324  771734 cri.go:89] found id: ""
	I0307 19:35:48.634412  771734 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 19:35:48.649398  771734 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 19:35:48.660975  771734 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0307 19:35:48.661148  771734 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 19:35:48.676050  771734 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 19:35:48.676130  771734 kubeadm.go:156] found existing configuration files:
	
	I0307 19:35:48.676252  771734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0307 19:35:48.690366  771734 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 19:35:48.690539  771734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 19:35:48.702610  771734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0307 19:35:48.713159  771734 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 19:35:48.713252  771734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 19:35:48.722907  771734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0307 19:35:48.732645  771734 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 19:35:48.732715  771734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 19:35:48.742880  771734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0307 19:35:48.751488  771734 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 19:35:48.751627  771734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
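
	The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is removed otherwise. The same pattern as a loop, a sketch using only the endpoint and files from this log:

	ep='https://control-plane.minikube.internal:8443'
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already targets the expected control-plane endpoint
	  sudo grep -q "$ep" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
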
	I0307 19:35:48.759939  771734 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0307 19:35:48.804662  771734 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0307 19:35:48.805017  771734 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 19:35:48.847976  771734 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0307 19:35:48.848067  771734 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0307 19:35:48.848118  771734 kubeadm.go:309] OS: Linux
	I0307 19:35:48.848177  771734 kubeadm.go:309] CGROUPS_CPU: enabled
	I0307 19:35:48.848235  771734 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0307 19:35:48.848286  771734 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0307 19:35:48.848345  771734 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0307 19:35:48.848402  771734 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0307 19:35:48.848462  771734 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0307 19:35:48.848513  771734 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0307 19:35:48.848582  771734 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0307 19:35:48.848640  771734 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0307 19:35:48.922663  771734 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 19:35:48.922848  771734 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 19:35:48.922998  771734 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0307 19:35:49.162474  771734 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 19:35:47.609214  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:50.109551  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:49.165445  771734 out.go:204]   - Generating certificates and keys ...
	I0307 19:35:49.165660  771734 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 19:35:49.165772  771734 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 19:35:50.485888  771734 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0307 19:35:50.863723  771734 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0307 19:35:51.123853  771734 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0307 19:35:51.839259  771734 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0307 19:35:52.606769  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:54.611278  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:52.579318  771734 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0307 19:35:52.579603  771734 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [embed-certs-327564 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0307 19:35:52.964269  771734 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0307 19:35:52.964582  771734 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-327564 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0307 19:35:53.372229  771734 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0307 19:35:55.022714  771734 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0307 19:35:55.341864  771734 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0307 19:35:55.341940  771734 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 19:35:55.917625  771734 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 19:35:56.527757  771734 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 19:35:56.852222  771734 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 19:35:57.738026  771734 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 19:35:57.738892  771734 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 19:35:57.741727  771734 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
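
	At this point kubeadm has written all certificates under /var/lib/minikube/certs and the four kubeconfigs under /etc/kubernetes. The SANs reported above (embed-certs-327564, localhost, 192.168.76.2, 127.0.0.1, ::1) can be confirmed on the node; the path below assumes kubeadm's standard layout inside the certificateDir named in this log:

	# path assumes the standard kubeadm layout under the certificateDir above
	sudo openssl x509 -in /var/lib/minikube/certs/etcd/server.crt -noout -text | grep -A2 'Subject Alternative Name'
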
	I0307 19:35:57.107679  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:59.110902  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:36:01.134439  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:35:57.744358  771734 out.go:204]   - Booting up control plane ...
	I0307 19:35:57.744456  771734 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 19:35:57.744538  771734 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 19:35:57.745227  771734 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 19:35:57.759395  771734 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 19:35:57.760489  771734 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 19:35:57.760644  771734 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 19:35:57.883823  771734 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
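
	While kubeadm blocks in wait-control-plane, the static pods can be watched directly on the node, using the same crictl label filter the runner uses elsewhere in this log plus the kubelet journal:

	sudo crictl ps --label io.kubernetes.pod.namespace=kube-system
	sudo journalctl -u kubelet -f
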
	I0307 19:36:03.606584  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:36:05.614979  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:36:05.887317  771734 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.003640 seconds
	I0307 19:36:05.887471  771734 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 19:36:05.902754  771734 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 19:36:06.427886  771734 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 19:36:06.428085  771734 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-327564 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 19:36:06.940823  771734 kubeadm.go:309] [bootstrap-token] Using token: xgqx1n.1ci97fwt6benlf6i
	I0307 19:36:06.942578  771734 out.go:204]   - Configuring RBAC rules ...
	I0307 19:36:06.942697  771734 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 19:36:06.951017  771734 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 19:36:06.962302  771734 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 19:36:06.967349  771734 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 19:36:06.972119  771734 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 19:36:06.977887  771734 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 19:36:06.993730  771734 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 19:36:07.297992  771734 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 19:36:07.358160  771734 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
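
	The bootstrap token minted above (xgqx1n.1ci97fwt6benlf6i) is an ordinary kubeadm token and can be listed or revoked from the control plane; kubeadm is on the runner's pinned binaries path, per the init command earlier:

	sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token list
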
	I0307 19:36:07.359473  771734 kubeadm.go:309] 
	I0307 19:36:07.359563  771734 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 19:36:07.359575  771734 kubeadm.go:309] 
	I0307 19:36:07.359659  771734 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 19:36:07.359669  771734 kubeadm.go:309] 
	I0307 19:36:07.359694  771734 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 19:36:07.359756  771734 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 19:36:07.359807  771734 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 19:36:07.359819  771734 kubeadm.go:309] 
	I0307 19:36:07.359872  771734 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 19:36:07.359882  771734 kubeadm.go:309] 
	I0307 19:36:07.359934  771734 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 19:36:07.359943  771734 kubeadm.go:309] 
	I0307 19:36:07.359996  771734 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 19:36:07.360075  771734 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 19:36:07.360147  771734 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 19:36:07.360155  771734 kubeadm.go:309] 
	I0307 19:36:07.360236  771734 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 19:36:07.360313  771734 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 19:36:07.360322  771734 kubeadm.go:309] 
	I0307 19:36:07.360403  771734 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token xgqx1n.1ci97fwt6benlf6i \
	I0307 19:36:07.360505  771734 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:17caf06007e0764138c1f585dfe115b801f228bdeee3cba3ea5bff5870a6e807 \
	I0307 19:36:07.360529  771734 kubeadm.go:309] 	--control-plane 
	I0307 19:36:07.360538  771734 kubeadm.go:309] 
	I0307 19:36:07.360619  771734 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 19:36:07.360628  771734 kubeadm.go:309] 
	I0307 19:36:07.360706  771734 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token xgqx1n.1ci97fwt6benlf6i \
	I0307 19:36:07.360808  771734 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:17caf06007e0764138c1f585dfe115b801f228bdeee3cba3ea5bff5870a6e807 
	I0307 19:36:07.370691  771734 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0307 19:36:07.370812  771734 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
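
	The --discovery-token-ca-cert-hash in the join commands above can be recomputed from the cluster CA. This is the standard kubeadm recipe, pointed at minikube's certificateDir from this log rather than the default /etc/kubernetes/pki:

	# recompute the sha256 CA public-key hash used by --discovery-token-ca-cert-hash
	sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
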
	I0307 19:36:07.370837  771734 cni.go:84] Creating CNI manager for ""
	I0307 19:36:07.370849  771734 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 19:36:07.373311  771734 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0307 19:36:08.112034  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:36:10.606231  762831 pod_ready.go:102] pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace has status "Ready":"False"
	I0307 19:36:10.606263  762831 pod_ready.go:81] duration metric: took 4m0.006564846s for pod "metrics-server-9975d5f86-w9hn6" in "kube-system" namespace to be "Ready" ...
	E0307 19:36:10.606274  762831 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0307 19:36:10.606283  762831 pod_ready.go:38] duration metric: took 5m19.314418834s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 19:36:10.606296  762831 api_server.go:52] waiting for apiserver process to appear ...
	I0307 19:36:10.606341  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 19:36:10.606408  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 19:36:10.656033  762831 cri.go:89] found id: "ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020"
	I0307 19:36:10.656094  762831 cri.go:89] found id: "99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939"
	I0307 19:36:10.656106  762831 cri.go:89] found id: ""
	I0307 19:36:10.656114  762831 logs.go:276] 2 containers: [ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020 99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939]
	I0307 19:36:10.656170  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.659898  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.663752  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 19:36:10.663849  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 19:36:10.711143  762831 cri.go:89] found id: "5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d"
	I0307 19:36:10.711195  762831 cri.go:89] found id: "05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190"
	I0307 19:36:10.711214  762831 cri.go:89] found id: ""
	I0307 19:36:10.711239  762831 logs.go:276] 2 containers: [5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d 05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190]
	I0307 19:36:10.711326  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.715686  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.719890  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 19:36:10.720008  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 19:36:10.782775  762831 cri.go:89] found id: "9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b"
	I0307 19:36:10.782850  762831 cri.go:89] found id: "f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384"
	I0307 19:36:10.782869  762831 cri.go:89] found id: ""
	I0307 19:36:10.782892  762831 logs.go:276] 2 containers: [9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384]
	I0307 19:36:10.782980  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.787225  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.791833  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 19:36:10.791972  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 19:36:10.850718  762831 cri.go:89] found id: "62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f"
	I0307 19:36:10.850794  762831 cri.go:89] found id: "156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4"
	I0307 19:36:10.850813  762831 cri.go:89] found id: ""
	I0307 19:36:10.850837  762831 logs.go:276] 2 containers: [62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f 156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4]
	I0307 19:36:10.850923  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.854985  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.858671  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 19:36:10.858755  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 19:36:10.909124  762831 cri.go:89] found id: "eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5"
	I0307 19:36:10.909149  762831 cri.go:89] found id: "5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907"
	I0307 19:36:10.909155  762831 cri.go:89] found id: ""
	I0307 19:36:10.909162  762831 logs.go:276] 2 containers: [eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5 5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907]
	I0307 19:36:10.909233  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.913385  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.917239  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 19:36:10.917311  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 19:36:10.962456  762831 cri.go:89] found id: "2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916"
	I0307 19:36:10.962475  762831 cri.go:89] found id: "f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08"
	I0307 19:36:10.962480  762831 cri.go:89] found id: ""
	I0307 19:36:10.962487  762831 logs.go:276] 2 containers: [2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916 f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08]
	I0307 19:36:10.962547  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.966970  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:10.970855  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 19:36:10.970976  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 19:36:11.016102  762831 cri.go:89] found id: "18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57"
	I0307 19:36:11.016128  762831 cri.go:89] found id: "1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29"
	I0307 19:36:11.016134  762831 cri.go:89] found id: ""
	I0307 19:36:11.016141  762831 logs.go:276] 2 containers: [18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57 1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29]
	I0307 19:36:11.016221  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:11.019977  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:11.023680  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 19:36:11.023815  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 19:36:11.062030  762831 cri.go:89] found id: "ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa"
	I0307 19:36:11.062103  762831 cri.go:89] found id: "f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093"
	I0307 19:36:11.062141  762831 cri.go:89] found id: ""
	I0307 19:36:11.062151  762831 logs.go:276] 2 containers: [ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093]
	I0307 19:36:11.062215  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:11.066359  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:11.070205  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0307 19:36:11.070279  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0307 19:36:11.109710  762831 cri.go:89] found id: "90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a"
	I0307 19:36:11.109783  762831 cri.go:89] found id: ""
	I0307 19:36:11.109813  762831 logs.go:276] 1 container: [90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a]
	I0307 19:36:11.109911  762831 ssh_runner.go:195] Run: which crictl
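
	The block above is minikube's container discovery pass: one crictl listing per component, two IDs apiece where a component was restarted. Condensed into a loop with the same flags the runner uses:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet storage-provisioner kubernetes-dashboard; do
	  echo "== $name =="
	  # -a includes exited containers; --quiet prints only the IDs
	  sudo crictl ps -a --quiet --name="$name"
	done
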
	I0307 19:36:11.114323  762831 logs.go:123] Gathering logs for kube-apiserver [ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020] ...
	I0307 19:36:11.114404  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020"
	I0307 19:36:11.176440  762831 logs.go:123] Gathering logs for etcd [5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d] ...
	I0307 19:36:11.176475  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d"
	I0307 19:36:11.230993  762831 logs.go:123] Gathering logs for kube-scheduler [62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f] ...
	I0307 19:36:11.231024  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f"
	I0307 19:36:11.300327  762831 logs.go:123] Gathering logs for kube-proxy [eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5] ...
	I0307 19:36:11.300359  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5"
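
	Each "Gathering logs" step is a 400-line tail: crictl for containers, journald for the daemons. The exact forms used by the runner, with --no-pager added as a convenience for non-interactive use:

	sudo /usr/bin/crictl logs --tail 400 ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020
	sudo journalctl -u containerd -n 400 --no-pager
	sudo journalctl -u kubelet -n 400 --no-pager
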
	I0307 19:36:07.375251  771734 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0307 19:36:07.392229  771734 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0307 19:36:07.392292  771734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0307 19:36:07.423731  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0307 19:36:08.486857  771734 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.063085403s)
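
	With the CNI manifest applied, kindnet should roll out as a DaemonSet in kube-system. A quick check using the runner's pinned kubectl (the app=kindnet label is an assumption about minikube's kindnet manifest, not something shown in this log):

	# label selector is an assumption about minikube's kindnet manifest
	sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -l app=kindnet -o wide
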
	I0307 19:36:08.486899  771734 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 19:36:08.487028  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:08.487123  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-327564 minikube.k8s.io/updated_at=2024_03_07T19_36_08_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=526fad16cb967ea3a5b243df32efb88cb58b81ec minikube.k8s.io/name=embed-certs-327564 minikube.k8s.io/primary=true
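
	Both post-init commands above are straightforward to verify: the minikube-rbac binding grants cluster-admin to kube-system:default, and the label call stamps version/commit metadata onto the node:

	sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get clusterrolebinding minikube-rbac
	sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get node embed-certs-327564 --show-labels
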
	I0307 19:36:08.497308  771734 ops.go:34] apiserver oom_adj: -16
	I0307 19:36:08.710034  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:09.210820  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:09.711072  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:10.210737  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:10.710463  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:11.210338  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:11.710103  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
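
	The half-second cadence above is minikube polling until the controller-manager creates the default ServiceAccount, the last readiness gate before addons are applied. As a loop it is simply:

	# wait until the default ServiceAccount exists (mirrors the polling above)
	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
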
	I0307 19:36:11.354977  762831 logs.go:123] Gathering logs for storage-provisioner [f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093] ...
	I0307 19:36:11.355007  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093"
	I0307 19:36:11.398803  762831 logs.go:123] Gathering logs for containerd ...
	I0307 19:36:11.398834  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 19:36:11.458507  762831 logs.go:123] Gathering logs for kube-controller-manager [f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08] ...
	I0307 19:36:11.458543  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08"
	I0307 19:36:11.532204  762831 logs.go:123] Gathering logs for kubernetes-dashboard [90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a] ...
	I0307 19:36:11.535311  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a"
	I0307 19:36:11.601998  762831 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:36:11.602025  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:36:11.766962  762831 logs.go:123] Gathering logs for kube-apiserver [99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939] ...
	I0307 19:36:11.766998  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939"
	I0307 19:36:11.869356  762831 logs.go:123] Gathering logs for coredns [9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b] ...
	I0307 19:36:11.869391  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b"
	I0307 19:36:11.909582  762831 logs.go:123] Gathering logs for coredns [f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384] ...
	I0307 19:36:11.909612  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384"
	I0307 19:36:11.955107  762831 logs.go:123] Gathering logs for kube-proxy [5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907] ...
	I0307 19:36:11.955137  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907"
	I0307 19:36:11.995979  762831 logs.go:123] Gathering logs for kube-controller-manager [2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916] ...
	I0307 19:36:11.996008  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916"
	I0307 19:36:12.056706  762831 logs.go:123] Gathering logs for dmesg ...
	I0307 19:36:12.056737  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:36:12.076984  762831 logs.go:123] Gathering logs for kindnet [18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57] ...
	I0307 19:36:12.077015  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57"
	I0307 19:36:12.120010  762831 logs.go:123] Gathering logs for container status ...
	I0307 19:36:12.120041  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:36:12.163174  762831 logs.go:123] Gathering logs for kubelet ...
	I0307 19:36:12.163205  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 19:36:12.218431  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562011     662 reflector.go:138] object-"kube-system"/"kube-proxy-token-vdtz2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-vdtz2" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.218682  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562119     662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.218920  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562202     662 reflector.go:138] object-"kube-system"/"coredns-token-zckzk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-zckzk" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.219145  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562259     662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.220131  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562331     662 reflector.go:138] object-"kube-system"/"kindnet-token-g7knv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-g7knv" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.220392  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562675     662 reflector.go:138] object-"default"/"default-token-rgss9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-rgss9" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.220637  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562749     662 reflector.go:138] object-"kube-system"/"metrics-server-token-t79rr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t79rr" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.220886  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.672443     662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-k4nsq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-k4nsq" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:12.233667  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:54 old-k8s-version-490121 kubelet[662]: E0307 19:30:54.291729     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:12.233913  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:54 old-k8s-version-490121 kubelet[662]: E0307 19:30:54.749946     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.236687  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:07 old-k8s-version-490121 kubelet[662]: E0307 19:31:07.413698     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:12.238386  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:20 old-k8s-version-490121 kubelet[662]: E0307 19:31:20.430619     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.239318  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:23 old-k8s-version-490121 kubelet[662]: E0307 19:31:23.898812     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.239668  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:24 old-k8s-version-490121 kubelet[662]: E0307 19:31:24.894861     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.240124  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:25 old-k8s-version-490121 kubelet[662]: E0307 19:31:25.899230     662 pod_workers.go:191] Error syncing pod 57707985-e3ad-463c-a7a9-150bfe271af7 ("storage-provisioner_kube-system(57707985-e3ad-463c-a7a9-150bfe271af7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(57707985-e3ad-463c-a7a9-150bfe271af7)"
	W0307 19:36:12.242571  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:31 old-k8s-version-490121 kubelet[662]: E0307 19:31:31.411639     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:12.242927  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:32 old-k8s-version-490121 kubelet[662]: E0307 19:31:32.839916     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.243723  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:43 old-k8s-version-490121 kubelet[662]: E0307 19:31:43.406752     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.244204  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:43 old-k8s-version-490121 kubelet[662]: E0307 19:31:43.954053     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.244574  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:52 old-k8s-version-490121 kubelet[662]: E0307 19:31:52.840518     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.244771  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:57 old-k8s-version-490121 kubelet[662]: E0307 19:31:57.404922     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.245369  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:08 old-k8s-version-490121 kubelet[662]: E0307 19:32:08.021627     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.249632  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:08 old-k8s-version-490121 kubelet[662]: E0307 19:32:08.404313     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.249991  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:12 old-k8s-version-490121 kubelet[662]: E0307 19:32:12.840494     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.252464  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:19 old-k8s-version-490121 kubelet[662]: E0307 19:32:19.412896     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:12.252795  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:25 old-k8s-version-490121 kubelet[662]: E0307 19:32:25.404097     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.252980  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:34 old-k8s-version-490121 kubelet[662]: E0307 19:32:34.404539     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.253309  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:38 old-k8s-version-490121 kubelet[662]: E0307 19:32:38.404439     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.253495  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:49 old-k8s-version-490121 kubelet[662]: E0307 19:32:49.404401     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.254222  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:51 old-k8s-version-490121 kubelet[662]: E0307 19:32:51.130308     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.254581  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:52 old-k8s-version-490121 kubelet[662]: E0307 19:32:52.840092     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.254790  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:02 old-k8s-version-490121 kubelet[662]: E0307 19:33:02.404637     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.255142  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:05 old-k8s-version-490121 kubelet[662]: E0307 19:33:05.404490     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.255352  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:14 old-k8s-version-490121 kubelet[662]: E0307 19:33:14.404856     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.255709  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:17 old-k8s-version-490121 kubelet[662]: E0307 19:33:17.403976     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.255916  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:27 old-k8s-version-490121 kubelet[662]: E0307 19:33:27.404480     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.256274  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:30 old-k8s-version-490121 kubelet[662]: E0307 19:33:30.404449     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.256625  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:41 old-k8s-version-490121 kubelet[662]: E0307 19:33:41.404057     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.260957  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:42 old-k8s-version-490121 kubelet[662]: E0307 19:33:42.415836     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:12.261998  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:52 old-k8s-version-490121 kubelet[662]: E0307 19:33:52.404978     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.262221  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:55 old-k8s-version-490121 kubelet[662]: E0307 19:33:55.404850     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.264387  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:07 old-k8s-version-490121 kubelet[662]: E0307 19:34:07.404060     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.264606  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:09 old-k8s-version-490121 kubelet[662]: E0307 19:34:09.404722     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.265123  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:22 old-k8s-version-490121 kubelet[662]: E0307 19:34:22.325297     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.266214  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:23 old-k8s-version-490121 kubelet[662]: E0307 19:34:23.328654     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.266457  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:23 old-k8s-version-490121 kubelet[662]: E0307 19:34:23.404300     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.266819  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:34 old-k8s-version-490121 kubelet[662]: E0307 19:34:34.405645     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.267028  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:35 old-k8s-version-490121 kubelet[662]: E0307 19:34:35.404699     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.267382  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:47 old-k8s-version-490121 kubelet[662]: E0307 19:34:47.404007     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.267620  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:49 old-k8s-version-490121 kubelet[662]: E0307 19:34:49.404434     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.267992  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:59 old-k8s-version-490121 kubelet[662]: E0307 19:34:59.404258     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.268205  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:02 old-k8s-version-490121 kubelet[662]: E0307 19:35:02.405853     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.268570  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:12 old-k8s-version-490121 kubelet[662]: E0307 19:35:12.405265     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.268783  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:17 old-k8s-version-490121 kubelet[662]: E0307 19:35:17.404385     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.269134  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:27 old-k8s-version-490121 kubelet[662]: E0307 19:35:27.406835     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.269703  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:32 old-k8s-version-490121 kubelet[662]: E0307 19:35:32.407052     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.270066  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:40 old-k8s-version-490121 kubelet[662]: E0307 19:35:40.404052     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.270279  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:45 old-k8s-version-490121 kubelet[662]: E0307 19:35:45.404603     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.270632  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:52 old-k8s-version-490121 kubelet[662]: E0307 19:35:52.404806     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.270841  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:56 old-k8s-version-490121 kubelet[662]: E0307 19:35:56.408243     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.271203  762831 logs.go:138] Found kubelet problem: Mar 07 19:36:05 old-k8s-version-490121 kubelet[662]: E0307 19:36:05.404245     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.271558  762831 logs.go:138] Found kubelet problem: Mar 07 19:36:09 old-k8s-version-490121 kubelet[662]: E0307 19:36:09.404371     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0307 19:36:12.271580  762831 logs.go:123] Gathering logs for etcd [05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190] ...
	I0307 19:36:12.271605  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190"
	I0307 19:36:12.320083  762831 logs.go:123] Gathering logs for kube-scheduler [156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4] ...
	I0307 19:36:12.320115  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4"
	I0307 19:36:12.372179  762831 logs.go:123] Gathering logs for kindnet [1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29] ...
	I0307 19:36:12.372386  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29"
	I0307 19:36:12.424384  762831 logs.go:123] Gathering logs for storage-provisioner [ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa] ...
	I0307 19:36:12.424413  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa"
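The "Gathering logs for ..." lines above all follow the same shape: shell out to crictl on the node and capture the last 400 lines of a named container's logs. A minimal sketch of that step (not minikube's actual implementation; the truncated container ID below is hypothetical):

```go
// Sketch: tail a container's logs the way the log lines above do,
// via `sudo /usr/bin/crictl logs --tail 400 <id>`.
package main

import (
	"fmt"
	"os/exec"
)

func containerLogs(id string) (string, error) {
	// CombinedOutput captures both streams, since crictl relays the
	// container's stdout and stderr separately.
	out, err := exec.Command("sudo", "/usr/bin/crictl",
		"logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := containerLogs("05b5946f0742") // hypothetical truncated ID
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Println(logs)
}
```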
	I0307 19:36:12.473228  762831 out.go:304] Setting ErrFile to fd 2...
	I0307 19:36:12.473254  762831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 19:36:12.473310  762831 out.go:239] X Problems detected in kubelet:
	W0307 19:36:12.473329  762831 out.go:239]   Mar 07 19:35:45 old-k8s-version-490121 kubelet[662]: E0307 19:35:45.404603     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.473339  762831 out.go:239]   Mar 07 19:35:52 old-k8s-version-490121 kubelet[662]: E0307 19:35:52.404806     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.473348  762831 out.go:239]   Mar 07 19:35:56 old-k8s-version-490121 kubelet[662]: E0307 19:35:56.408243     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:12.473363  762831 out.go:239]   Mar 07 19:36:05 old-k8s-version-490121 kubelet[662]: E0307 19:36:05.404245     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:12.473369  762831 out.go:239]   Mar 07 19:36:09 old-k8s-version-490121 kubelet[662]: E0307 19:36:09.404371     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0307 19:36:12.473377  762831 out.go:304] Setting ErrFile to fd 2...
	I0307 19:36:12.473382  762831 out.go:338] TERM=,COLORTERM=, which probably does not support color
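The "Found kubelet problem" and "X Problems detected in kubelet" lines above come from scanning the kubelet journal for known error patterns. A rough sketch of that kind of scan, under assumptions — the marker strings here are illustrative, not minikube's real pattern set:

```go
// Sketch: filter journalctl output down to kubelet error lines that
// match a small, illustrative set of problem markers.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func kubeletProblems(journal string) []string {
	markers := []string{"pod_workers.go", "reflector.go:138"} // illustrative only
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet lines can be very long
	for sc.Scan() {
		line := sc.Text()
		for _, m := range markers {
			// Only flag error-level (E...) entries that hit a marker.
			if strings.Contains(line, m) && strings.Contains(line, "]: E") {
				problems = append(problems, line)
				break
			}
		}
	}
	return problems
}

func main() {
	sample := `Mar 07 19:35:45 node kubelet[662]: E0307 19:35:45.404603     662 pod_workers.go:191] Error syncing pod`
	for _, p := range kubeletProblems(sample) {
		fmt.Println("Found kubelet problem:", p)
	}
}
```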
	I0307 19:36:12.210091  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:12.710748  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:13.210547  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:13.710781  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:14.210651  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:14.710863  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:15.210137  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:15.710393  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:16.210113  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:16.710723  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:17.210816  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:17.710738  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:18.210369  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:18.710992  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:19.210261  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:19.710972  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:20.210057  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:20.710270  771734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:36:20.817387  771734 kubeadm.go:1106] duration metric: took 12.330406209s to wait for elevateKubeSystemPrivileges
	W0307 19:36:20.817424  771734 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 19:36:20.817433  771734 kubeadm.go:393] duration metric: took 32.24545592s to StartCluster
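The run of identical `kubectl get sa default` lines above is a poll loop: minikube retries roughly every 500ms until the default service account exists, then records the elapsed time (12.33s here) for elevateKubeSystemPrivileges. A minimal sketch of that wait, assuming kubectl is invocable on the node as shown in the log:

```go
// Sketch: poll `kubectl get sa default` every 500ms until it succeeds
// or a deadline passes, matching the cadence visible above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```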
	I0307 19:36:20.817450  771734 settings.go:142] acquiring lock: {Name:mkebfa804b6349436c6d99572f0f0da9cb5ad1f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:36:20.817550  771734 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18239-558171/kubeconfig
	I0307 19:36:20.818982  771734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/kubeconfig: {Name:mk6862a934ece36327360ff645a33ee6e04a2f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:36:20.819199  771734 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0307 19:36:20.821512  771734 out.go:177] * Verifying Kubernetes components...
	I0307 19:36:20.819321  771734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0307 19:36:20.819484  771734 config.go:182] Loaded profile config "embed-certs-327564": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 19:36:20.819493  771734 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 19:36:20.823483  771734 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-327564"
	I0307 19:36:20.823509  771734 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-327564"
	I0307 19:36:20.823515  771734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:36:20.823538  771734 host.go:66] Checking if "embed-certs-327564" exists ...
	I0307 19:36:20.823689  771734 addons.go:69] Setting default-storageclass=true in profile "embed-certs-327564"
	I0307 19:36:20.823710  771734 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-327564"
	I0307 19:36:20.823976  771734 cli_runner.go:164] Run: docker container inspect embed-certs-327564 --format={{.State.Status}}
	I0307 19:36:20.824029  771734 cli_runner.go:164] Run: docker container inspect embed-certs-327564 --format={{.State.Status}}
	I0307 19:36:20.857965  771734 addons.go:234] Setting addon default-storageclass=true in "embed-certs-327564"
	I0307 19:36:20.858005  771734 host.go:66] Checking if "embed-certs-327564" exists ...
	I0307 19:36:20.858433  771734 cli_runner.go:164] Run: docker container inspect embed-certs-327564 --format={{.State.Status}}
	I0307 19:36:20.870983  771734 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:36:20.873019  771734 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 19:36:20.873043  771734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 19:36:20.873114  771734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327564
	I0307 19:36:20.898501  771734 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 19:36:20.898522  771734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 19:36:20.898586  771734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327564
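The `docker container inspect -f` calls above resolve which host port Docker mapped to the node container's 22/tcp, so the ssh client can target 127.0.0.1:<port> (33818 in the sshutil lines that follow). A sketch of that lookup, assuming a local docker CLI; the container name is taken from the log:

```go
// Sketch: recover the host port mapped to a container's 22/tcp using
// the same Go template the log lines above pass to docker inspect.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("embed-certs-327564")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port) // e.g. 33818 in the log above
}
```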
	I0307 19:36:20.913533  771734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/embed-certs-327564/id_rsa Username:docker}
	I0307 19:36:20.932599  771734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/embed-certs-327564/id_rsa Username:docker}
	I0307 19:36:21.035830  771734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 19:36:21.132514  771734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
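The long sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (the "host record injected into CoreDNS's ConfigMap" line further down confirms it took effect). A minimal sketch of the same transformation done in Go instead of sed, assuming the Corefile text has already been fetched from the ConfigMap:

```go
// Sketch: insert a hosts{} block (resolving host.minikube.internal to
// the gateway IP) immediately before CoreDNS's forward directive,
// mirroring the sed `/forward . \/etc\/resolv.conf/i` rule above.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, gatewayIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		gatewayIP)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock) // insert just before the forward line
		}
		out.WriteString(line)
		out.WriteString("\n")
	}
	return strings.TrimSuffix(out.String(), "\n")
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.76.1"))
}
```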
	I0307 19:36:21.133934  771734 node_ready.go:35] waiting up to 6m0s for node "embed-certs-327564" to be "Ready" ...
	I0307 19:36:21.153599  771734 node_ready.go:49] node "embed-certs-327564" has status "Ready":"True"
	I0307 19:36:21.153677  771734 node_ready.go:38] duration metric: took 19.657606ms for node "embed-certs-327564" to be "Ready" ...
	I0307 19:36:21.153710  771734 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 19:36:21.162737  771734 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7bbqp" in "kube-system" namespace to be "Ready" ...
	I0307 19:36:21.195702  771734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 19:36:21.224311  771734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 19:36:21.730101  771734 start.go:948] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0307 19:36:22.120906  771734 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0307 19:36:22.474361  762831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:36:22.494422  762831 api_server.go:72] duration metric: took 5m48.072661998s to wait for apiserver process to appear ...
	I0307 19:36:22.494445  762831 api_server.go:88] waiting for apiserver healthz status ...
	I0307 19:36:22.494486  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 19:36:22.494542  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 19:36:22.559595  762831 cri.go:89] found id: "ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020"
	I0307 19:36:22.559616  762831 cri.go:89] found id: "99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939"
	I0307 19:36:22.559624  762831 cri.go:89] found id: ""
	I0307 19:36:22.559632  762831 logs.go:276] 2 containers: [ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020 99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939]
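Each cri.go listing above runs `crictl ps -a --quiet --name=<name>`, which prints one container ID per line; the empty trailing entry (the bare `found id: ""` line) has to be filtered out before the "N containers: [...]" summary. A minimal sketch of that discovery step:

```go
// Sketch: list all CRI containers matching a name filter and collect
// the non-empty IDs, as in the cri.go/logs.go lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl",
		"ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" { // drop the empty tail entry
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := findContainers("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
```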
	I0307 19:36:22.559686  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.563850  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.568388  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 19:36:22.568464  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 19:36:22.628461  762831 cri.go:89] found id: "5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d"
	I0307 19:36:22.628491  762831 cri.go:89] found id: "05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190"
	I0307 19:36:22.628496  762831 cri.go:89] found id: ""
	I0307 19:36:22.628504  762831 logs.go:276] 2 containers: [5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d 05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190]
	I0307 19:36:22.628558  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.632518  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.636566  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 19:36:22.636648  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 19:36:22.716216  762831 cri.go:89] found id: "9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b"
	I0307 19:36:22.716236  762831 cri.go:89] found id: "f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384"
	I0307 19:36:22.716241  762831 cri.go:89] found id: ""
	I0307 19:36:22.716248  762831 logs.go:276] 2 containers: [9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384]
	I0307 19:36:22.716303  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.720114  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.726286  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 19:36:22.726389  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 19:36:22.775384  762831 cri.go:89] found id: "62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f"
	I0307 19:36:22.775417  762831 cri.go:89] found id: "156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4"
	I0307 19:36:22.775423  762831 cri.go:89] found id: ""
	I0307 19:36:22.775431  762831 logs.go:276] 2 containers: [62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f 156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4]
	I0307 19:36:22.775535  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.779574  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.783105  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 19:36:22.783214  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 19:36:22.834745  762831 cri.go:89] found id: "eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5"
	I0307 19:36:22.834766  762831 cri.go:89] found id: "5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907"
	I0307 19:36:22.834771  762831 cri.go:89] found id: ""
	I0307 19:36:22.834778  762831 logs.go:276] 2 containers: [eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5 5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907]
	I0307 19:36:22.834885  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.838940  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.842643  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 19:36:22.842766  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 19:36:22.885986  762831 cri.go:89] found id: "2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916"
	I0307 19:36:22.886051  762831 cri.go:89] found id: "f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08"
	I0307 19:36:22.886073  762831 cri.go:89] found id: ""
	I0307 19:36:22.886087  762831 logs.go:276] 2 containers: [2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916 f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08]
	I0307 19:36:22.886147  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.889768  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.893174  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 19:36:22.893250  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 19:36:22.932715  762831 cri.go:89] found id: "18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57"
	I0307 19:36:22.932734  762831 cri.go:89] found id: "1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29"
	I0307 19:36:22.932739  762831 cri.go:89] found id: ""
	I0307 19:36:22.932746  762831 logs.go:276] 2 containers: [18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57 1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29]
	I0307 19:36:22.932801  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.936637  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.940288  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 19:36:22.940389  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 19:36:22.982459  762831 cri.go:89] found id: "ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa"
	I0307 19:36:22.982483  762831 cri.go:89] found id: "f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093"
	I0307 19:36:22.982488  762831 cri.go:89] found id: ""
	I0307 19:36:22.982495  762831 logs.go:276] 2 containers: [ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093]
	I0307 19:36:22.982568  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.986251  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:22.989380  762831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0307 19:36:22.989460  762831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0307 19:36:23.030709  762831 cri.go:89] found id: "90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a"
	I0307 19:36:23.030732  762831 cri.go:89] found id: ""
	I0307 19:36:23.030739  762831 logs.go:276] 1 containers: [90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a]
	I0307 19:36:23.030816  762831 ssh_runner.go:195] Run: which crictl
	I0307 19:36:23.034771  762831 logs.go:123] Gathering logs for coredns [9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b] ...
	I0307 19:36:23.034798  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b"
	I0307 19:36:23.078994  762831 logs.go:123] Gathering logs for kube-scheduler [62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f] ...
	I0307 19:36:23.079022  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f"
	I0307 19:36:23.126348  762831 logs.go:123] Gathering logs for kube-proxy [eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5] ...
	I0307 19:36:23.126376  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5"
	I0307 19:36:23.172310  762831 logs.go:123] Gathering logs for storage-provisioner [f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093] ...
	I0307 19:36:23.172375  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093"
	I0307 19:36:23.212617  762831 logs.go:123] Gathering logs for containerd ...
	I0307 19:36:23.212684  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 19:36:23.282162  762831 logs.go:123] Gathering logs for kubelet ...
	I0307 19:36:23.282199  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 19:36:23.342202  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562011     662 reflector.go:138] object-"kube-system"/"kube-proxy-token-vdtz2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-vdtz2" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.342735  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562119     662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.343078  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562202     662 reflector.go:138] object-"kube-system"/"coredns-token-zckzk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-zckzk" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.343584  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562259     662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.343891  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562331     662 reflector.go:138] object-"kube-system"/"kindnet-token-g7knv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-g7knv" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.344406  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562675     662 reflector.go:138] object-"default"/"default-token-rgss9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-rgss9" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.344748  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.562749     662 reflector.go:138] object-"kube-system"/"metrics-server-token-t79rr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t79rr" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.345285  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:51 old-k8s-version-490121 kubelet[662]: E0307 19:30:51.672443     662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-k4nsq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-k4nsq" is forbidden: User "system:node:old-k8s-version-490121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-490121' and this object
	W0307 19:36:23.353834  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:54 old-k8s-version-490121 kubelet[662]: E0307 19:30:54.291729     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:23.354037  762831 logs.go:138] Found kubelet problem: Mar 07 19:30:54 old-k8s-version-490121 kubelet[662]: E0307 19:30:54.749946     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.356811  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:07 old-k8s-version-490121 kubelet[662]: E0307 19:31:07.413698     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:23.358512  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:20 old-k8s-version-490121 kubelet[662]: E0307 19:31:20.430619     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.359433  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:23 old-k8s-version-490121 kubelet[662]: E0307 19:31:23.898812     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.359765  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:24 old-k8s-version-490121 kubelet[662]: E0307 19:31:24.894861     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.360200  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:25 old-k8s-version-490121 kubelet[662]: E0307 19:31:25.899230     662 pod_workers.go:191] Error syncing pod 57707985-e3ad-463c-a7a9-150bfe271af7 ("storage-provisioner_kube-system(57707985-e3ad-463c-a7a9-150bfe271af7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(57707985-e3ad-463c-a7a9-150bfe271af7)"
	W0307 19:36:23.364160  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:31 old-k8s-version-490121 kubelet[662]: E0307 19:31:31.411639     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:23.364506  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:32 old-k8s-version-490121 kubelet[662]: E0307 19:31:32.839916     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.365283  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:43 old-k8s-version-490121 kubelet[662]: E0307 19:31:43.406752     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.365814  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:43 old-k8s-version-490121 kubelet[662]: E0307 19:31:43.954053     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.366145  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:52 old-k8s-version-490121 kubelet[662]: E0307 19:31:52.840518     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.366334  762831 logs.go:138] Found kubelet problem: Mar 07 19:31:57 old-k8s-version-490121 kubelet[662]: E0307 19:31:57.404922     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.366928  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:08 old-k8s-version-490121 kubelet[662]: E0307 19:32:08.021627     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.367115  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:08 old-k8s-version-490121 kubelet[662]: E0307 19:32:08.404313     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.367441  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:12 old-k8s-version-490121 kubelet[662]: E0307 19:32:12.840494     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.369872  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:19 old-k8s-version-490121 kubelet[662]: E0307 19:32:19.412896     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:23.370699  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:25 old-k8s-version-490121 kubelet[662]: E0307 19:32:25.404097     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.370901  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:34 old-k8s-version-490121 kubelet[662]: E0307 19:32:34.404539     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.371238  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:38 old-k8s-version-490121 kubelet[662]: E0307 19:32:38.404439     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.371423  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:49 old-k8s-version-490121 kubelet[662]: E0307 19:32:49.404401     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.372010  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:51 old-k8s-version-490121 kubelet[662]: E0307 19:32:51.130308     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.372334  762831 logs.go:138] Found kubelet problem: Mar 07 19:32:52 old-k8s-version-490121 kubelet[662]: E0307 19:32:52.840092     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.372514  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:02 old-k8s-version-490121 kubelet[662]: E0307 19:33:02.404637     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.372838  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:05 old-k8s-version-490121 kubelet[662]: E0307 19:33:05.404490     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.373020  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:14 old-k8s-version-490121 kubelet[662]: E0307 19:33:14.404856     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.373343  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:17 old-k8s-version-490121 kubelet[662]: E0307 19:33:17.403976     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.373536  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:27 old-k8s-version-490121 kubelet[662]: E0307 19:33:27.404480     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.373887  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:30 old-k8s-version-490121 kubelet[662]: E0307 19:33:30.404449     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.374213  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:41 old-k8s-version-490121 kubelet[662]: E0307 19:33:41.404057     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.376719  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:42 old-k8s-version-490121 kubelet[662]: E0307 19:33:42.415836     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0307 19:36:23.377054  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:52 old-k8s-version-490121 kubelet[662]: E0307 19:33:52.404978     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.377238  762831 logs.go:138] Found kubelet problem: Mar 07 19:33:55 old-k8s-version-490121 kubelet[662]: E0307 19:33:55.404850     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.377572  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:07 old-k8s-version-490121 kubelet[662]: E0307 19:34:07.404060     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.377754  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:09 old-k8s-version-490121 kubelet[662]: E0307 19:34:09.404722     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.378206  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:22 old-k8s-version-490121 kubelet[662]: E0307 19:34:22.325297     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.378658  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:23 old-k8s-version-490121 kubelet[662]: E0307 19:34:23.328654     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.378843  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:23 old-k8s-version-490121 kubelet[662]: E0307 19:34:23.404300     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.379171  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:34 old-k8s-version-490121 kubelet[662]: E0307 19:34:34.405645     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.379355  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:35 old-k8s-version-490121 kubelet[662]: E0307 19:34:35.404699     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.379679  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:47 old-k8s-version-490121 kubelet[662]: E0307 19:34:47.404007     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.379860  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:49 old-k8s-version-490121 kubelet[662]: E0307 19:34:49.404434     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.380186  762831 logs.go:138] Found kubelet problem: Mar 07 19:34:59 old-k8s-version-490121 kubelet[662]: E0307 19:34:59.404258     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.380367  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:02 old-k8s-version-490121 kubelet[662]: E0307 19:35:02.405853     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.380716  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:12 old-k8s-version-490121 kubelet[662]: E0307 19:35:12.405265     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.380901  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:17 old-k8s-version-490121 kubelet[662]: E0307 19:35:17.404385     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.381224  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:27 old-k8s-version-490121 kubelet[662]: E0307 19:35:27.406835     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.381405  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:32 old-k8s-version-490121 kubelet[662]: E0307 19:35:32.407052     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.381735  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:40 old-k8s-version-490121 kubelet[662]: E0307 19:35:40.404052     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.382360  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:45 old-k8s-version-490121 kubelet[662]: E0307 19:35:45.404603     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.382702  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:52 old-k8s-version-490121 kubelet[662]: E0307 19:35:52.404806     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.382887  762831 logs.go:138] Found kubelet problem: Mar 07 19:35:56 old-k8s-version-490121 kubelet[662]: E0307 19:35:56.408243     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.383217  762831 logs.go:138] Found kubelet problem: Mar 07 19:36:05 old-k8s-version-490121 kubelet[662]: E0307 19:36:05.404245     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.383399  762831 logs.go:138] Found kubelet problem: Mar 07 19:36:09 old-k8s-version-490121 kubelet[662]: E0307 19:36:09.404371     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:23.383723  762831 logs.go:138] Found kubelet problem: Mar 07 19:36:19 old-k8s-version-490121 kubelet[662]: E0307 19:36:19.404863     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:23.383904  762831 logs.go:138] Found kubelet problem: Mar 07 19:36:20 old-k8s-version-490121 kubelet[662]: E0307 19:36:20.420688     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0307 19:36:23.383915  762831 logs.go:123] Gathering logs for kube-apiserver [ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020] ...
	I0307 19:36:23.383933  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020"
	I0307 19:36:23.476177  762831 logs.go:123] Gathering logs for kube-controller-manager [f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08] ...
	I0307 19:36:23.476227  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08"
	I0307 19:36:23.572070  762831 logs.go:123] Gathering logs for storage-provisioner [ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa] ...
	I0307 19:36:23.572107  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa"
	I0307 19:36:23.617823  762831 logs.go:123] Gathering logs for kubernetes-dashboard [90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a] ...
	I0307 19:36:23.617853  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a"
	I0307 19:36:23.696040  762831 logs.go:123] Gathering logs for container status ...
	I0307 19:36:23.696108  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:36:23.762456  762831 logs.go:123] Gathering logs for kube-apiserver [99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939] ...
	I0307 19:36:23.762532  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939"
	I0307 19:36:23.835674  762831 logs.go:123] Gathering logs for etcd [05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190] ...
	I0307 19:36:23.835750  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190"
	I0307 19:36:23.903568  762831 logs.go:123] Gathering logs for coredns [f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384] ...
	I0307 19:36:23.903699  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384"
	I0307 19:36:23.952099  762831 logs.go:123] Gathering logs for kindnet [18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57] ...
	I0307 19:36:23.952172  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57"
	I0307 19:36:24.014266  762831 logs.go:123] Gathering logs for kindnet [1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29] ...
	I0307 19:36:24.014344  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29"
	I0307 19:36:24.149466  762831 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:36:24.149546  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:36:24.364732  762831 logs.go:123] Gathering logs for etcd [5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d] ...
	I0307 19:36:24.364896  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d"
	I0307 19:36:24.422381  762831 logs.go:123] Gathering logs for kube-proxy [5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907] ...
	I0307 19:36:24.422452  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907"
	I0307 19:36:24.472397  762831 logs.go:123] Gathering logs for kube-controller-manager [2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916] ...
	I0307 19:36:24.472461  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916"
	I0307 19:36:24.554174  762831 logs.go:123] Gathering logs for dmesg ...
	I0307 19:36:24.554248  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:36:24.577804  762831 logs.go:123] Gathering logs for kube-scheduler [156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4] ...
	I0307 19:36:24.577881  762831 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4"
	I0307 19:36:24.642561  762831 out.go:304] Setting ErrFile to fd 2...
	I0307 19:36:24.642626  762831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 19:36:24.642701  762831 out.go:239] X Problems detected in kubelet:
	W0307 19:36:24.642745  762831 out.go:239]   Mar 07 19:35:56 old-k8s-version-490121 kubelet[662]: E0307 19:35:56.408243     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:24.642899  762831 out.go:239]   Mar 07 19:36:05 old-k8s-version-490121 kubelet[662]: E0307 19:36:05.404245     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:24.642937  762831 out.go:239]   Mar 07 19:36:09 old-k8s-version-490121 kubelet[662]: E0307 19:36:09.404371     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 19:36:24.643025  762831 out.go:239]   Mar 07 19:36:19 old-k8s-version-490121 kubelet[662]: E0307 19:36:19.404863     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	W0307 19:36:24.643060  762831 out.go:239]   Mar 07 19:36:20 old-k8s-version-490121 kubelet[662]: E0307 19:36:20.420688     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0307 19:36:24.643094  762831 out.go:304] Setting ErrFile to fd 2...
	I0307 19:36:24.643117  762831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:36:22.123372  771734 addons.go:505] duration metric: took 1.303869004s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0307 19:36:22.234856  771734 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-327564" context rescaled to 1 replicas
	I0307 19:36:23.170538  771734 pod_ready.go:102] pod "coredns-5dd5756b68-7bbqp" in "kube-system" namespace has status "Ready":"False"
	I0307 19:36:25.669208  771734 pod_ready.go:102] pod "coredns-5dd5756b68-7bbqp" in "kube-system" namespace has status "Ready":"False"
	I0307 19:36:27.669815  771734 pod_ready.go:102] pod "coredns-5dd5756b68-7bbqp" in "kube-system" namespace has status "Ready":"False"
	I0307 19:36:30.170553  771734 pod_ready.go:102] pod "coredns-5dd5756b68-7bbqp" in "kube-system" namespace has status "Ready":"False"
	I0307 19:36:34.644283  762831 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0307 19:36:34.654169  762831 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0307 19:36:34.656597  762831 out.go:177] 
	W0307 19:36:34.658535  762831 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0307 19:36:34.658604  762831 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0307 19:36:34.658626  762831 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0307 19:36:34.658632  762831 out.go:239] * 
	W0307 19:36:34.659694  762831 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:36:34.662993  762831 out.go:177] 
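The boxed advice above can be applied directly. As a minimal shell sketch of the two suggested steps (the commands are taken verbatim from the messages above; the profile name is the one used in this run):

	# suggested recovery for K8S_UNHEALTHY_CONTROL_PLANE: wipe all profiles and cached state
	minikube delete --all --purge
	# collect full logs to attach to a GitHub issue
	minikube logs --file=logs.txt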
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	a6084cdde944f       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   5dc0d6ef12e44       dashboard-metrics-scraper-8d5bb5db8-w2dn4
	ef7f1755441a0       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   63cf698ab6bf4       storage-provisioner
	90801d79bb21b       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   a888f023f13f0       kubernetes-dashboard-cd95d586-pqw67
	d0441d0ae48e3       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   84448e0067aab       busybox
	f1273e5fc5a03       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   63cf698ab6bf4       storage-provisioner
	eef127c3b8686       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   9e15124462ded       kube-proxy-5rbpn
	18ade5d6b5c4c       4740c1948d3fc       5 minutes ago       Running             kindnet-cni                 1                   e1c85816835c9       kindnet-mgtjt
	9a2f527ecba34       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   797b3f8a9f4e8       coredns-74ff55c5b-mskrg
	ca7ce18516e00       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   28ef2f2ba5200       kube-apiserver-old-k8s-version-490121
	62592ab3b7ee4       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   b06031c31470a       kube-scheduler-old-k8s-version-490121
	2ab85d0a64427       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   2061ba7118f7b       kube-controller-manager-old-k8s-version-490121
	5d6a1009f3a73       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   5694f449f4130       etcd-old-k8s-version-490121
	378a2b8295ca6       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   dc5ecff6c4b62       busybox
	f4e00b3f8611c       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   dcb87b2351872       coredns-74ff55c5b-mskrg
	5c8bdc8e9ddfa       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   18120dda8fd8d       kube-proxy-5rbpn
	1a64bbfdc2b02       4740c1948d3fc       8 minutes ago       Exited              kindnet-cni                 0                   097e98a4a65f4       kindnet-mgtjt
	99dfdaab36b9d       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   40d81581b981a       kube-apiserver-old-k8s-version-490121
	156315217b9e0       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   73d02cf9f767b       kube-scheduler-old-k8s-version-490121
	f425b02ac359c       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   96d681c6ef23e       kube-controller-manager-old-k8s-version-490121
	05b5946f07423       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   10accc58b0d6b       etcd-old-k8s-version-490121
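The container-status table above corresponds to the probe logged earlier ('Gathering logs for container status'); a sketch of reproducing it by hand, using the same crictl invocation the log shows minikube running inside the node:

	minikube -p old-k8s-version-490121 ssh -- sudo crictl ps -a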
	
	
	==> containerd <==
	Mar 07 19:32:50 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:32:50.440571894Z" level=info msg="CreateContainer within sandbox \"5dc0d6ef12e443a853686be440a14e209f52876f39b35e7d1015d003238195a5\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"0582a2756b43a22a7724205dce09f283b2c6e2e195a3891d5f3b3bbdd61ced64\""
	Mar 07 19:32:50 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:32:50.442533508Z" level=info msg="StartContainer for \"0582a2756b43a22a7724205dce09f283b2c6e2e195a3891d5f3b3bbdd61ced64\""
	Mar 07 19:32:50 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:32:50.506671313Z" level=info msg="StartContainer for \"0582a2756b43a22a7724205dce09f283b2c6e2e195a3891d5f3b3bbdd61ced64\" returns successfully"
	Mar 07 19:32:50 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:32:50.536232803Z" level=info msg="shim disconnected" id=0582a2756b43a22a7724205dce09f283b2c6e2e195a3891d5f3b3bbdd61ced64
	Mar 07 19:32:50 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:32:50.536455112Z" level=warning msg="cleaning up after shim disconnected" id=0582a2756b43a22a7724205dce09f283b2c6e2e195a3891d5f3b3bbdd61ced64 namespace=k8s.io
	Mar 07 19:32:50 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:32:50.536483953Z" level=info msg="cleaning up dead shim"
	Mar 07 19:32:50 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:32:50.544851062Z" level=warning msg="cleanup warnings time=\"2024-03-07T19:32:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2936 runtime=io.containerd.runc.v2\n"
	Mar 07 19:32:51 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:32:51.132138052Z" level=info msg="RemoveContainer for \"9124327f2d97792e01774b6641723105520a1b2505cae69e68b96ed4b201e36c\""
	Mar 07 19:32:51 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:32:51.147697074Z" level=info msg="RemoveContainer for \"9124327f2d97792e01774b6641723105520a1b2505cae69e68b96ed4b201e36c\" returns successfully"
	Mar 07 19:33:42 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:33:42.405636840Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 19:33:42 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:33:42.412435129Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Mar 07 19:33:42 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:33:42.414277589Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Mar 07 19:34:21 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:34:21.406926154Z" level=info msg="CreateContainer within sandbox \"5dc0d6ef12e443a853686be440a14e209f52876f39b35e7d1015d003238195a5\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Mar 07 19:34:21 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:34:21.423134820Z" level=info msg="CreateContainer within sandbox \"5dc0d6ef12e443a853686be440a14e209f52876f39b35e7d1015d003238195a5\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"a6084cdde944fbe505bcb05e3e41efece4c60e62141bca51f992767826e4f631\""
	Mar 07 19:34:21 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:34:21.424031275Z" level=info msg="StartContainer for \"a6084cdde944fbe505bcb05e3e41efece4c60e62141bca51f992767826e4f631\""
	Mar 07 19:34:21 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:34:21.486547955Z" level=info msg="StartContainer for \"a6084cdde944fbe505bcb05e3e41efece4c60e62141bca51f992767826e4f631\" returns successfully"
	Mar 07 19:34:21 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:34:21.512644362Z" level=info msg="shim disconnected" id=a6084cdde944fbe505bcb05e3e41efece4c60e62141bca51f992767826e4f631
	Mar 07 19:34:21 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:34:21.512720440Z" level=warning msg="cleaning up after shim disconnected" id=a6084cdde944fbe505bcb05e3e41efece4c60e62141bca51f992767826e4f631 namespace=k8s.io
	Mar 07 19:34:21 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:34:21.512734093Z" level=info msg="cleaning up dead shim"
	Mar 07 19:34:21 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:34:21.521908041Z" level=warning msg="cleanup warnings time=\"2024-03-07T19:34:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3195 runtime=io.containerd.runc.v2\n"
	Mar 07 19:34:22 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:34:22.329707522Z" level=info msg="RemoveContainer for \"0582a2756b43a22a7724205dce09f283b2c6e2e195a3891d5f3b3bbdd61ced64\""
	Mar 07 19:34:22 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:34:22.335280776Z" level=info msg="RemoveContainer for \"0582a2756b43a22a7724205dce09f283b2c6e2e195a3891d5f3b3bbdd61ced64\" returns successfully"
	Mar 07 19:36:35 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:36:35.405322296Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 19:36:35 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:36:35.421868846Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Mar 07 19:36:35 old-k8s-version-490121 containerd[568]: time="2024-03-07T19:36:35.423941959Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
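The pull failures above are expected: the metrics-server image is deliberately pointed at the non-resolvable registry fake.domain. A sketch of reproducing the DNS failure from inside the node (assuming crictl at /usr/bin/crictl, as the log shows):

	minikube -p old-k8s-version-490121 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
	# expected error: dial tcp: lookup fake.domain ... no such host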
	
	
	==> coredns [9a2f527ecba342bae8ed8f07e3dacc6fe618fe87d31fa4c8f552ebffaa55599b] <==
	I0307 19:31:24.683345       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-07 19:30:54.682736585 +0000 UTC m=+0.084033635) (total time: 30.000490878s):
	Trace[2019727887]: [30.000490878s] [30.000490878s] END
	E0307 19:31:24.683376       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0307 19:31:24.683474       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-07 19:30:54.683241321 +0000 UTC m=+0.084538371) (total time: 30.000221849s):
	Trace[939984059]: [30.000221849s] [30.000221849s] END
	E0307 19:31:24.683478       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0307 19:31:24.694389       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-07 19:30:54.693912173 +0000 UTC m=+0.095209223) (total time: 30.000448203s):
	Trace[1474941318]: [30.000448203s] [30.000448203s] END
	E0307 19:31:24.694409       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:34036 - 17 "HINFO IN 3603254449104899921.6502033093172393733. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022910074s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f4e00b3f8611c41774e97ced61d813c222e4c477311f777ff32be35b75f70384] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:45727 - 6069 "HINFO IN 3740224681191150328.849708405113895451. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02425604s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-490121
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-490121
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=526fad16cb967ea3a5b243df32efb88cb58b81ec
	                    minikube.k8s.io/name=old-k8s-version-490121
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T19_27_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 19:27:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-490121
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 19:36:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 19:31:52 +0000   Thu, 07 Mar 2024 19:27:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 19:31:52 +0000   Thu, 07 Mar 2024 19:27:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 19:31:52 +0000   Thu, 07 Mar 2024 19:27:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 19:31:52 +0000   Thu, 07 Mar 2024 19:28:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-490121
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 facb49c7e4f543eb91788bc786542810
	  System UUID:                580b3f60-f802-4435-bab4-1dc7cfcb4a04
	  Boot ID:                    a949ea88-4a69-4ab0-89c5-986450203265
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 coredns-74ff55c5b-mskrg                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m28s
	  kube-system                 etcd-old-k8s-version-490121                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m34s
	  kube-system                 kindnet-mgtjt                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m28s
	  kube-system                 kube-apiserver-old-k8s-version-490121             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-controller-manager-old-k8s-version-490121    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-proxy-5rbpn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-scheduler-old-k8s-version-490121             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 metrics-server-9975d5f86-w9hn6                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m22s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-w2dn4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-pqw67               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)   100m (5%)
	  memory             420Mi (5%)   220Mi (2%)
	  ephemeral-storage  100Mi (0%)   0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m54s (x5 over 8m55s)  kubelet     Node old-k8s-version-490121 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m54s (x4 over 8m55s)  kubelet     Node old-k8s-version-490121 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m54s (x4 over 8m55s)  kubelet     Node old-k8s-version-490121 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m35s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m35s                  kubelet     Node old-k8s-version-490121 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m35s                  kubelet     Node old-k8s-version-490121 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m35s                  kubelet     Node old-k8s-version-490121 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m35s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m28s                  kubelet     Node old-k8s-version-490121 status is now: NodeReady
	  Normal  Starting                 8m25s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m54s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m54s (x8 over 5m54s)  kubelet     Node old-k8s-version-490121 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m54s (x7 over 5m54s)  kubelet     Node old-k8s-version-490121 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m54s (x8 over 5m54s)  kubelet     Node old-k8s-version-490121 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m54s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m41s                  kube-proxy  Starting kube-proxy.
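The node description above was gathered with the invocation logged earlier in this run; as a sketch, the same output can be regenerated on the node with:

	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig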
	
	
	==> dmesg <==
	[  +0.001107] FS-Cache: O-key=[8] 'd83c5c0100000000'
	[  +0.000774] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001034] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=00000000bb5cdba7
	[  +0.001104] FS-Cache: N-key=[8] 'd83c5c0100000000'
	[  +0.003880] FS-Cache: Duplicate cookie detected
	[  +0.000717] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000977] FS-Cache: O-cookie d=0000000083f1d117{9p.inode} n=00000000860da42d
	[  +0.001206] FS-Cache: O-key=[8] 'd83c5c0100000000'
	[  +0.000720] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=000000002d1b8011
	[  +0.001027] FS-Cache: N-key=[8] 'd83c5c0100000000'
	[  +2.780951] FS-Cache: Duplicate cookie detected
	[  +0.000732] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001109] FS-Cache: O-cookie d=0000000083f1d117{9p.inode} n=000000003a76bb30
	[  +0.001108] FS-Cache: O-key=[8] 'd73c5c0100000000'
	[  +0.000779] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000945] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=00000000929d8fe2
	[  +0.001069] FS-Cache: N-key=[8] 'd73c5c0100000000'
	[  +0.306498] FS-Cache: Duplicate cookie detected
	[  +0.000774] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001003] FS-Cache: O-cookie d=0000000083f1d117{9p.inode} n=0000000010ad09c7
	[  +0.001118] FS-Cache: O-key=[8] 'dd3c5c0100000000'
	[  +0.000756] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000928] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=000000002bfb4aa6
	[  +0.001033] FS-Cache: N-key=[8] 'dd3c5c0100000000'
	
	
	==> etcd [05b5946f074233e2a2ab87112eb492b37e819fe68a625e504d3d1683dafbb190] <==
	raft2024/03/07 19:27:42 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/03/07 19:27:42 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/03/07 19:27:42 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-03-07 19:27:42.837755 I | etcdserver: setting up the initial cluster version to 3.4
	2024-03-07 19:27:42.838564 I | etcdserver: published {Name:old-k8s-version-490121 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-03-07 19:27:42.838709 I | embed: ready to serve client requests
	2024-03-07 19:27:42.842489 I | embed: serving client requests on 192.168.85.2:2379
	2024-03-07 19:27:42.842799 I | embed: ready to serve client requests
	2024-03-07 19:27:42.846604 I | embed: serving client requests on 127.0.0.1:2379
	2024-03-07 19:27:42.852854 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-03-07 19:27:42.866567 I | etcdserver/api: enabled capabilities for version 3.4
	2024-03-07 19:28:05.809677 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:28:12.388816 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:28:22.388086 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:28:32.388212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:28:42.391309 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:28:52.388009 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:29:02.388062 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:29:12.388220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:29:22.388223 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:29:32.388059 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:29:42.388082 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:29:52.388181 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:30:02.388192 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:30:12.387944 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [5d6a1009f3a738e6134be28d53df0c689221a6f5c60ec491f9c60aba463a073d] <==
	2024-03-07 19:32:29.366319 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:32:39.366346 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:32:49.366477 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:32:59.366387 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:33:09.366329 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:33:19.366347 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:33:29.366392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:33:39.366546 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:33:49.366346 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:33:59.366357 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:34:09.366370 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:34:19.366481 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:34:29.366378 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:34:39.366843 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:34:49.366437 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:34:59.366301 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:35:09.366393 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:35:19.366703 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:35:29.366991 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:35:39.366268 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:35:49.366458 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:35:59.366382 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:36:09.366492 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:36:19.366462 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 19:36:29.366343 I | etcdserver/api/etcdhttp: /health OK (status code 200)
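The /health lines above come from etcd's HTTP health endpoint, which this log shows being served on 127.0.0.1:2379 and 192.168.85.2:2379. A hedged sketch of probing it by hand; the cert paths are an assumption based on minikube's usual /var/lib/minikube/certs layout, and curl is assumed present in the node image:

	minikube -p old-k8s-version-490121 ssh -- sudo curl -s \
	  --cacert /var/lib/minikube/certs/etcd/ca.crt \
	  --cert /var/lib/minikube/certs/etcd/server.crt \
	  --key /var/lib/minikube/certs/etcd/server.key \
	  https://127.0.0.1:2379/health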
	
	
	==> kernel <==
	 19:36:36 up  3:19,  0 users,  load average: 1.65, 2.00, 2.52
	Linux old-k8s-version-490121 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [18ade5d6b5c4c10eef9993a7987d4939f332c8517182d15c717de0c94fa32e57] <==
	I0307 19:34:35.484586       1 main.go:227] handling current node
	I0307 19:34:45.500131       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:34:45.500155       1 main.go:227] handling current node
	I0307 19:34:55.511003       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:34:55.511030       1 main.go:227] handling current node
	I0307 19:35:05.530729       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:35:05.530772       1 main.go:227] handling current node
	I0307 19:35:15.542990       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:35:15.543022       1 main.go:227] handling current node
	I0307 19:35:25.553743       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:35:25.553837       1 main.go:227] handling current node
	I0307 19:35:35.568826       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:35:35.568859       1 main.go:227] handling current node
	I0307 19:35:45.583843       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:35:45.584269       1 main.go:227] handling current node
	I0307 19:35:55.594463       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:35:55.594494       1 main.go:227] handling current node
	I0307 19:36:05.616186       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:36:05.616213       1 main.go:227] handling current node
	I0307 19:36:15.627573       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:36:15.627601       1 main.go:227] handling current node
	I0307 19:36:25.644676       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:36:25.644710       1 main.go:227] handling current node
	I0307 19:36:35.662358       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:36:35.662385       1 main.go:227] handling current node
	
	
	==> kindnet [1a64bbfdc2b02903957dd7d0b87bd907abe69379f055630a01c14b476ce40b29] <==
	podIP = 192.168.85.2
	I0307 19:28:09.918950       1 main.go:116] setting mtu 1500 for CNI 
	I0307 19:28:09.918972       1 main.go:146] kindnetd IP family: "ipv4"
	I0307 19:28:09.918986       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0307 19:28:40.153742       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0307 19:28:40.167015       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:28:40.167049       1 main.go:227] handling current node
	I0307 19:28:50.183769       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:28:50.184937       1 main.go:227] handling current node
	I0307 19:29:00.220768       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:29:00.220868       1 main.go:227] handling current node
	I0307 19:29:10.233202       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:29:10.233783       1 main.go:227] handling current node
	I0307 19:29:20.259957       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:29:20.259990       1 main.go:227] handling current node
	I0307 19:29:30.290317       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:29:30.290427       1 main.go:227] handling current node
	I0307 19:29:40.295095       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:29:40.295431       1 main.go:227] handling current node
	I0307 19:29:50.333811       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:29:50.333838       1 main.go:227] handling current node
	I0307 19:30:00.415622       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:30:00.415889       1 main.go:227] handling current node
	I0307 19:30:10.420304       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0307 19:30:10.420338       1 main.go:227] handling current node
	
	
	==> kube-apiserver [99dfdaab36b9d1de349d9550c59c8c3f81df15ddb7e0c5b623692865d7a65939] <==
	I0307 19:27:50.991336       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0307 19:27:50.991359       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0307 19:27:51.482669       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0307 19:27:51.531207       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0307 19:27:51.673965       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0307 19:27:51.675039       1 controller.go:606] quota admission added evaluator for: endpoints
	I0307 19:27:51.681379       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0307 19:27:52.688168       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0307 19:27:53.226933       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0307 19:27:53.321460       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0307 19:28:01.688944       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0307 19:28:08.916747       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0307 19:28:08.922266       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0307 19:28:25.742778       1 client.go:360] parsed scheme: "passthrough"
	I0307 19:28:25.742819       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 19:28:25.742961       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 19:28:57.766070       1 client.go:360] parsed scheme: "passthrough"
	I0307 19:28:57.766129       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 19:28:57.766138       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 19:29:27.818863       1 client.go:360] parsed scheme: "passthrough"
	I0307 19:29:27.818911       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 19:29:27.819064       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 19:29:59.033907       1 client.go:360] parsed scheme: "passthrough"
	I0307 19:29:59.034108       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 19:29:59.034231       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [ca7ce18516e00288ccaa256e5bd851dd1cd369b890c3a9a183165bdc13389020] <==
	I0307 19:33:14.826605       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 19:33:14.826614       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 19:33:46.089094       1 client.go:360] parsed scheme: "passthrough"
	I0307 19:33:46.089145       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 19:33:46.089154       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0307 19:33:55.153102       1 handler_proxy.go:102] no RequestInfo found in the context
	E0307 19:33:55.153183       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0307 19:33:55.153194       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0307 19:34:25.204004       1 client.go:360] parsed scheme: "passthrough"
	I0307 19:34:25.204049       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 19:34:25.204086       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 19:35:03.886558       1 client.go:360] parsed scheme: "passthrough"
	I0307 19:35:03.886604       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 19:35:03.886646       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 19:35:41.868433       1 client.go:360] parsed scheme: "passthrough"
	I0307 19:35:41.868478       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 19:35:41.868631       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0307 19:35:52.283032       1 handler_proxy.go:102] no RequestInfo found in the context
	E0307 19:35:52.283300       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0307 19:35:52.283318       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0307 19:36:11.916104       1 client.go:360] parsed scheme: "passthrough"
	I0307 19:36:11.916167       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 19:36:11.916176       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [2ab85d0a644279199f2ca1291b3a03fe47b2943a8b80b2d9fc03bc1d42b67916] <==
	W0307 19:32:15.836002       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 19:32:41.889909       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 19:32:47.486198       1 request.go:655] Throttling request took 1.04741415s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0307 19:32:48.337895       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 19:33:12.391531       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 19:33:19.988483       1 request.go:655] Throttling request took 1.048530027s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0307 19:33:20.840002       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 19:33:42.893416       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 19:33:52.490428       1 request.go:655] Throttling request took 1.048430389s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v2beta1?timeout=32s
	W0307 19:33:53.341832       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 19:34:13.395391       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 19:34:24.992405       1 request.go:655] Throttling request took 1.0480377s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0307 19:34:25.843746       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 19:34:43.897276       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 19:34:57.494257       1 request.go:655] Throttling request took 1.048473644s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0307 19:34:58.345736       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 19:35:14.399076       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 19:35:29.996269       1 request.go:655] Throttling request took 1.039270972s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0307 19:35:30.847885       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 19:35:44.901273       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 19:36:02.498538       1 request.go:655] Throttling request took 1.048429751s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0307 19:36:03.350051       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 19:36:15.403061       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 19:36:35.000600       1 request.go:655] Throttling request took 1.048471659s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0307 19:36:35.852904       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [f425b02ac359c5680ff726a835fea08c414b40556ac7ed51243910078d583d08] <==
	I0307 19:28:08.918697       1 shared_informer.go:247] Caches are synced for stateful set 
	I0307 19:28:08.918748       1 shared_informer.go:247] Caches are synced for GC 
	I0307 19:28:08.918788       1 shared_informer.go:247] Caches are synced for job 
	I0307 19:28:08.918863       1 shared_informer.go:247] Caches are synced for resource quota 
	I0307 19:28:08.918930       1 shared_informer.go:247] Caches are synced for taint 
	I0307 19:28:08.919001       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W0307 19:28:08.919072       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-490121. Assuming now as a timestamp.
	I0307 19:28:08.919132       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0307 19:28:08.919924       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0307 19:28:08.920253       1 event.go:291] "Event occurred" object="old-k8s-version-490121" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-490121 event: Registered Node old-k8s-version-490121 in Controller"
	I0307 19:28:08.950365       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0307 19:28:08.975523       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5rbpn"
	I0307 19:28:08.975846       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mgtjt"
	I0307 19:28:08.976846       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-b724r"
	I0307 19:28:09.049135       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-mskrg"
	I0307 19:28:09.092816       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	E0307 19:28:09.164232       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"4d13e347-f351-431f-bb26-082933f88d43", ResourceVersion:"267", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63845436473, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240202-8f1494ea\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001bc4940), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001bc4960)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001bc4980), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001bc49a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001bc49c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001bc49e0), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240202-8f1494ea", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001bc4a00)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001bc4a40)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001b09800), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001bd60f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400060bb90), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400070f930)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001bd6140)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0307 19:28:09.360071       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0307 19:28:09.360109       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0307 19:28:09.393248       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0307 19:28:10.224939       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0307 19:28:10.237136       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-b724r"
	I0307 19:28:13.919400       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0307 19:30:13.621877       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0307 19:30:13.719539       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [5c8bdc8e9ddfad2452883ea6d34d0adcf50e2a71b16cecbe3f4af6d16f9bc907] <==
	I0307 19:28:11.425045       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0307 19:28:11.425137       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0307 19:28:11.442547       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0307 19:28:11.442684       1 server_others.go:185] Using iptables Proxier.
	I0307 19:28:11.443053       1 server.go:650] Version: v1.20.0
	I0307 19:28:11.443866       1 config.go:315] Starting service config controller
	I0307 19:28:11.443886       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0307 19:28:11.443906       1 config.go:224] Starting endpoint slice config controller
	I0307 19:28:11.443966       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0307 19:28:11.544040       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0307 19:28:11.544090       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [eef127c3b86862c2bec02399b5a5c139fc6fe1d7d6515b98cdd84fef01157dc5] <==
	I0307 19:30:55.335417       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0307 19:30:55.335483       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0307 19:30:55.374020       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0307 19:30:55.374120       1 server_others.go:185] Using iptables Proxier.
	I0307 19:30:55.374337       1 server.go:650] Version: v1.20.0
	I0307 19:30:55.382515       1 config.go:315] Starting service config controller
	I0307 19:30:55.382528       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0307 19:30:55.382546       1 config.go:224] Starting endpoint slice config controller
	I0307 19:30:55.382549       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0307 19:30:55.482812       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0307 19:30:55.482905       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [156315217b9e0bb9b9b06ef2d546b3e5963161197e29029999b4ae9aac09b2d4] <==
	I0307 19:27:45.284494       1 serving.go:331] Generated self-signed cert in-memory
	W0307 19:27:50.168590       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0307 19:27:50.168892       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 19:27:50.170366       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0307 19:27:50.170673       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0307 19:27:50.252904       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0307 19:27:50.254724       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 19:27:50.254957       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 19:27:50.257947       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0307 19:27:50.259897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0307 19:27:50.259998       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 19:27:50.260073       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 19:27:50.270077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0307 19:27:50.270426       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 19:27:50.270672       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0307 19:27:50.270904       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0307 19:27:50.271187       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0307 19:27:50.271522       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 19:27:50.271806       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 19:27:50.272101       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 19:27:50.272540       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0307 19:27:51.186316       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 19:27:51.187595       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0307 19:27:51.224643       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0307 19:27:51.758060       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [62592ab3b7ee4a34a9b86dd8976c4e652b42ee4fda964df6ee208432a8e5da1f] <==
	I0307 19:30:45.561060       1 serving.go:331] Generated self-signed cert in-memory
	W0307 19:30:51.674714       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0307 19:30:51.674945       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 19:30:51.675031       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0307 19:30:51.675123       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0307 19:30:51.837457       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0307 19:30:51.837568       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 19:30:51.837577       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 19:30:51.837589       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0307 19:30:52.046861       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Mar 07 19:35:02 old-k8s-version-490121 kubelet[662]: E0307 19:35:02.405853     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 19:35:12 old-k8s-version-490121 kubelet[662]: I0307 19:35:12.403977     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: a6084cdde944fbe505bcb05e3e41efece4c60e62141bca51f992767826e4f631
	Mar 07 19:35:12 old-k8s-version-490121 kubelet[662]: E0307 19:35:12.405265     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	Mar 07 19:35:17 old-k8s-version-490121 kubelet[662]: E0307 19:35:17.404385     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 19:35:27 old-k8s-version-490121 kubelet[662]: I0307 19:35:27.406058     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: a6084cdde944fbe505bcb05e3e41efece4c60e62141bca51f992767826e4f631
	Mar 07 19:35:27 old-k8s-version-490121 kubelet[662]: E0307 19:35:27.406835     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	Mar 07 19:35:32 old-k8s-version-490121 kubelet[662]: E0307 19:35:32.407052     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 19:35:40 old-k8s-version-490121 kubelet[662]: I0307 19:35:40.403721     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: a6084cdde944fbe505bcb05e3e41efece4c60e62141bca51f992767826e4f631
	Mar 07 19:35:40 old-k8s-version-490121 kubelet[662]: E0307 19:35:40.404052     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	Mar 07 19:35:45 old-k8s-version-490121 kubelet[662]: E0307 19:35:45.404603     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 19:35:52 old-k8s-version-490121 kubelet[662]: I0307 19:35:52.404374     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: a6084cdde944fbe505bcb05e3e41efece4c60e62141bca51f992767826e4f631
	Mar 07 19:35:52 old-k8s-version-490121 kubelet[662]: E0307 19:35:52.404806     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	Mar 07 19:35:56 old-k8s-version-490121 kubelet[662]: E0307 19:35:56.408243     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 19:36:05 old-k8s-version-490121 kubelet[662]: I0307 19:36:05.403813     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: a6084cdde944fbe505bcb05e3e41efece4c60e62141bca51f992767826e4f631
	Mar 07 19:36:05 old-k8s-version-490121 kubelet[662]: E0307 19:36:05.404245     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	Mar 07 19:36:09 old-k8s-version-490121 kubelet[662]: E0307 19:36:09.404371     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 19:36:19 old-k8s-version-490121 kubelet[662]: I0307 19:36:19.403785     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: a6084cdde944fbe505bcb05e3e41efece4c60e62141bca51f992767826e4f631
	Mar 07 19:36:19 old-k8s-version-490121 kubelet[662]: E0307 19:36:19.404863     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	Mar 07 19:36:20 old-k8s-version-490121 kubelet[662]: E0307 19:36:20.420688     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 19:36:33 old-k8s-version-490121 kubelet[662]: I0307 19:36:33.403706     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: a6084cdde944fbe505bcb05e3e41efece4c60e62141bca51f992767826e4f631
	Mar 07 19:36:33 old-k8s-version-490121 kubelet[662]: E0307 19:36:33.404056     662 pod_workers.go:191] Error syncing pod 56c3d27a-28f2-422f-83ab-3e31ea3b9a1a ("dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w2dn4_kubernetes-dashboard(56c3d27a-28f2-422f-83ab-3e31ea3b9a1a)"
	Mar 07 19:36:35 old-k8s-version-490121 kubelet[662]: E0307 19:36:35.424322     662 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Mar 07 19:36:35 old-k8s-version-490121 kubelet[662]: E0307 19:36:35.424816     662 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Mar 07 19:36:35 old-k8s-version-490121 kubelet[662]: E0307 19:36:35.425021     662 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-t79rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-w9hn6_kube-system(d5b67e4
8-8737-4f11-b097-27b0b334760a): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Mar 07 19:36:35 old-k8s-version-490121 kubelet[662]: E0307 19:36:35.425200     662 pod_workers.go:191] Error syncing pod d5b67e48-8737-4f11-b097-27b0b334760a ("metrics-server-9975d5f86-w9hn6_kube-system(d5b67e48-8737-4f11-b097-27b0b334760a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	
	
	==> kubernetes-dashboard [90801d79bb21b5205d18bb8243bac76efccf3c63be39fd6bac993c3caa03646a] <==
	2024/03/07 19:31:15 Starting overwatch
	2024/03/07 19:31:15 Using namespace: kubernetes-dashboard
	2024/03/07 19:31:15 Using in-cluster config to connect to apiserver
	2024/03/07 19:31:15 Using secret token for csrf signing
	2024/03/07 19:31:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/03/07 19:31:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/03/07 19:31:16 Successful initial request to the apiserver, version: v1.20.0
	2024/03/07 19:31:16 Generating JWE encryption key
	2024/03/07 19:31:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/03/07 19:31:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/03/07 19:31:16 Initializing JWE encryption key from synchronized object
	2024/03/07 19:31:16 Creating in-cluster Sidecar client
	2024/03/07 19:31:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 19:31:16 Serving insecurely on HTTP port: 9090
	2024/03/07 19:31:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 19:32:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 19:32:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 19:33:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 19:33:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 19:34:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 19:34:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 19:35:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 19:35:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 19:36:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [ef7f1755441a0413b4ca24ee824d60b422df9895f6bc51f9b8a913469a14b6aa] <==
	I0307 19:31:41.512894       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 19:31:41.525281       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 19:31:41.525338       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 19:31:59.017940       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 19:31:59.018186       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c16ee077-99f9-4aff-9360-fb9b72970221", APIVersion:"v1", ResourceVersion:"855", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-490121_715f41a3-332d-4ebc-bf5f-87f98011376c became leader
	I0307 19:31:59.018425       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-490121_715f41a3-332d-4ebc-bf5f-87f98011376c!
	I0307 19:31:59.119347       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-490121_715f41a3-332d-4ebc-bf5f-87f98011376c!
	
	
	==> storage-provisioner [f1273e5fc5a033931d8ba135a0b1fed3cafedc1de1aef7baaffa2a616215e093] <==
	I0307 19:30:55.567087       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0307 19:31:25.591278       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-490121 -n old-k8s-version-490121
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-490121 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-w9hn6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-490121 describe pod metrics-server-9975d5f86-w9hn6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-490121 describe pod metrics-server-9975d5f86-w9hn6: exit status 1 (104.722947ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-w9hn6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-490121 describe pod metrics-server-9975d5f86-w9hn6: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (371.82s)
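
The failure mode is consistent throughout the kubelet log above: the metrics-server pod never becomes Ready because its image is pointed at fake.domain/registry.k8s.io/echoserver:1.4, and every pull attempt fails with "lookup fake.domain ... no such host" (ErrImagePull / ImagePullBackOff). A minimal sketch of the same post-mortem the harness performs, assuming the old-k8s-version-490121 context is still available and that the pod carries the usual k8s-app=metrics-server label (an assumption; the label is not shown in this report):

	# list pods that are not Running, across all namespaces
	# (same field-selector query helpers_test.go runs above)
	kubectl --context old-k8s-version-490121 get po -A --field-selector=status.phase!=Running

	# inspect the failing pod by label rather than by name, since the
	# ReplicaSet may have already replaced it (describe-by-name returned
	# NotFound in the transcript above)
	kubectl --context old-k8s-version-490121 -n kube-system get pod -l k8s-app=metrics-server -o wide
	kubectl --context old-k8s-version-490121 -n kube-system describe pod -l k8s-app=metrics-server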

                                                
                                    

Test pass (297/335)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 12.84
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 13.58
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.22
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 10.4
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.22
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.6
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
36 TestAddons/Setup 121.58
38 TestAddons/parallel/Registry 16.26
40 TestAddons/parallel/InspektorGadget 10.94
41 TestAddons/parallel/MetricsServer 6.83
44 TestAddons/parallel/CSI 69.61
45 TestAddons/parallel/Headlamp 11.54
46 TestAddons/parallel/CloudSpanner 6.63
47 TestAddons/parallel/LocalPath 51.37
48 TestAddons/parallel/NvidiaDevicePlugin 5.6
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.17
53 TestAddons/StoppedEnableDisable 12.26
54 TestCertOptions 34.03
55 TestCertExpiration 228.29
57 TestForceSystemdFlag 38.92
58 TestForceSystemdEnv 39.88
59 TestDockerEnvContainerd 46
64 TestErrorSpam/setup 31.72
65 TestErrorSpam/start 0.76
66 TestErrorSpam/status 0.99
67 TestErrorSpam/pause 1.66
68 TestErrorSpam/unpause 1.78
69 TestErrorSpam/stop 1.45
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 59.19
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 5.86
76 TestFunctional/serial/KubeContext 0.06
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 4.12
81 TestFunctional/serial/CacheCmd/cache/add_local 1.47
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.05
86 TestFunctional/serial/CacheCmd/cache/delete 0.14
87 TestFunctional/serial/MinikubeKubectlCmd 0.15
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
89 TestFunctional/serial/ExtraConfig 42.14
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 1.73
92 TestFunctional/serial/LogsFileCmd 1.68
93 TestFunctional/serial/InvalidService 4.83
95 TestFunctional/parallel/ConfigCmd 0.57
96 TestFunctional/parallel/DashboardCmd 10.13
97 TestFunctional/parallel/DryRun 0.53
98 TestFunctional/parallel/InternationalLanguage 0.24
99 TestFunctional/parallel/StatusCmd 1.1
103 TestFunctional/parallel/ServiceCmdConnect 10.65
104 TestFunctional/parallel/AddonsCmd 0.24
105 TestFunctional/parallel/PersistentVolumeClaim 24.01
107 TestFunctional/parallel/SSHCmd 0.63
108 TestFunctional/parallel/CpCmd 2.41
110 TestFunctional/parallel/FileSync 0.33
111 TestFunctional/parallel/CertSync 2.18
115 TestFunctional/parallel/NodeLabels 0.21
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.75
119 TestFunctional/parallel/License 0.4
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.5
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 6.21
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
133 TestFunctional/parallel/ProfileCmd/profile_list 0.41
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
135 TestFunctional/parallel/ServiceCmd/List 0.66
136 TestFunctional/parallel/MountCmd/any-port 7.56
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
139 TestFunctional/parallel/ServiceCmd/Format 0.49
140 TestFunctional/parallel/ServiceCmd/URL 0.48
141 TestFunctional/parallel/MountCmd/specific-port 1.34
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.94
143 TestFunctional/parallel/Version/short 0.07
144 TestFunctional/parallel/Version/components 1.39
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.64
150 TestFunctional/parallel/ImageCommands/Setup 2.22
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.26
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.66
161 TestFunctional/delete_addon-resizer_images 0.08
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestMutliControlPlane/serial/StartCluster 119.44
168 TestMutliControlPlane/serial/DeployApp 41.57
169 TestMutliControlPlane/serial/PingHostFromPods 1.71
170 TestMutliControlPlane/serial/AddWorkerNode 25.63
171 TestMutliControlPlane/serial/NodeLabels 0.12
172 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.82
173 TestMutliControlPlane/serial/CopyFile 19.78
174 TestMutliControlPlane/serial/StopSecondaryNode 12.85
175 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.55
176 TestMutliControlPlane/serial/RestartSecondaryNode 18.79
177 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.81
178 TestMutliControlPlane/serial/RestartClusterKeepsNodes 118.4
179 TestMutliControlPlane/serial/DeleteSecondaryNode 11.33
180 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
181 TestMutliControlPlane/serial/StopCluster 35.91
182 TestMutliControlPlane/serial/RestartCluster 69.99
183 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.63
184 TestMutliControlPlane/serial/AddSecondaryNode 42.86
185 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.8
189 TestJSONOutput/start/Command 55.62
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.77
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.69
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.76
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.23
214 TestKicCustomNetwork/create_custom_network 42.81
215 TestKicCustomNetwork/use_default_bridge_network 33.96
216 TestKicExistingNetwork 33.85
217 TestKicCustomSubnet 35.01
218 TestKicStaticIP 37.85
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 70.2
223 TestMountStart/serial/StartWithMountFirst 6.26
224 TestMountStart/serial/VerifyMountFirst 0.26
225 TestMountStart/serial/StartWithMountSecond 6.47
226 TestMountStart/serial/VerifyMountSecond 0.27
227 TestMountStart/serial/DeleteFirst 1.61
228 TestMountStart/serial/VerifyMountPostDelete 0.26
229 TestMountStart/serial/Stop 1.21
230 TestMountStart/serial/RestartStopped 7.38
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestMultiNode/serial/FreshStart2Nodes 75.52
235 TestMultiNode/serial/DeployApp2Nodes 5.84
236 TestMultiNode/serial/PingHostFrom2Pods 1.06
237 TestMultiNode/serial/AddNode 19.07
238 TestMultiNode/serial/MultiNodeLabels 0.09
239 TestMultiNode/serial/ProfileList 0.36
240 TestMultiNode/serial/CopyFile 10.33
241 TestMultiNode/serial/StopNode 2.24
242 TestMultiNode/serial/StartAfterStop 9.47
243 TestMultiNode/serial/RestartKeepsNodes 80.04
244 TestMultiNode/serial/DeleteNode 5.44
245 TestMultiNode/serial/StopMultiNode 24.02
246 TestMultiNode/serial/RestartMultiNode 53.88
247 TestMultiNode/serial/ValidateNameConflict 37.79
252 TestPreload 112.79
254 TestScheduledStopUnix 105.43
257 TestInsufficientStorage 11.67
258 TestRunningBinaryUpgrade 89.85
260 TestKubernetesUpgrade 383.8
261 TestMissingContainerUpgrade 171.95
263 TestPause/serial/Start 72.31
264 TestPause/serial/SecondStartNoReconfiguration 6.93
265 TestPause/serial/Pause 1.06
266 TestPause/serial/VerifyStatus 0.43
267 TestPause/serial/Unpause 0.78
268 TestPause/serial/PauseAgain 1.17
269 TestPause/serial/DeletePaused 2.76
270 TestPause/serial/VerifyDeletedResources 2.97
271 TestStoppedBinaryUpgrade/Setup 1.08
272 TestStoppedBinaryUpgrade/Upgrade 128.41
273 TestStoppedBinaryUpgrade/MinikubeLogs 1.68
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.13
283 TestNoKubernetes/serial/StartWithK8s 43.39
291 TestNetworkPlugins/group/false 4.66
292 TestNoKubernetes/serial/StartWithStopK8s 17.71
296 TestNoKubernetes/serial/Start 9.03
297 TestNoKubernetes/serial/VerifyK8sNotRunning 0.41
298 TestNoKubernetes/serial/ProfileList 1.15
299 TestNoKubernetes/serial/Stop 1.25
300 TestNoKubernetes/serial/StartNoArgs 7.79
301 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.46
303 TestStartStop/group/old-k8s-version/serial/FirstStart 174.68
305 TestStartStop/group/no-preload/serial/FirstStart 69.94
306 TestStartStop/group/old-k8s-version/serial/DeployApp 9.65
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.13
308 TestStartStop/group/old-k8s-version/serial/Stop 12.08
309 TestStartStop/group/no-preload/serial/DeployApp 8.39
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.42
313 TestStartStop/group/no-preload/serial/Stop 12.26
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.35
315 TestStartStop/group/no-preload/serial/SecondStart 268.81
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.12
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
319 TestStartStop/group/no-preload/serial/Pause 3.13
321 TestStartStop/group/embed-certs/serial/FirstStart 66.38
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/embed-certs/serial/DeployApp 8.35
324 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
325 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
326 TestStartStop/group/embed-certs/serial/Stop 13.37
327 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.34
328 TestStartStop/group/old-k8s-version/serial/Pause 2.81
330 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 66.15
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
332 TestStartStop/group/embed-certs/serial/SecondStart 294.05
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.38
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.22
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.01
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 304.14
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
341 TestStartStop/group/embed-certs/serial/Pause 3.16
343 TestStartStop/group/newest-cni/serial/FirstStart 45.75
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.34
346 TestStartStop/group/newest-cni/serial/Stop 1.47
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
348 TestStartStop/group/newest-cni/serial/SecondStart 15.77
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
352 TestStartStop/group/newest-cni/serial/Pause 3.02
353 TestNetworkPlugins/group/auto/Start 63.14
354 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
355 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
356 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
357 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.01
358 TestNetworkPlugins/group/kindnet/Start 61.57
359 TestNetworkPlugins/group/auto/KubeletFlags 0.35
360 TestNetworkPlugins/group/auto/NetCatPod 9.31
361 TestNetworkPlugins/group/auto/DNS 0.18
362 TestNetworkPlugins/group/auto/Localhost 0.2
363 TestNetworkPlugins/group/auto/HairPin 0.19
364 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
365 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
366 TestNetworkPlugins/group/kindnet/NetCatPod 10.38
367 TestNetworkPlugins/group/calico/Start 80.83
368 TestNetworkPlugins/group/kindnet/DNS 0.26
369 TestNetworkPlugins/group/kindnet/Localhost 0.36
370 TestNetworkPlugins/group/kindnet/HairPin 0.32
371 TestNetworkPlugins/group/custom-flannel/Start 67.49
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.31
374 TestNetworkPlugins/group/calico/NetCatPod 10.27
375 TestNetworkPlugins/group/calico/DNS 0.27
376 TestNetworkPlugins/group/calico/Localhost 0.19
377 TestNetworkPlugins/group/calico/HairPin 0.18
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.3
380 TestNetworkPlugins/group/custom-flannel/DNS 0.25
381 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
383 TestNetworkPlugins/group/enable-default-cni/Start 89.86
384 TestNetworkPlugins/group/flannel/Start 60.5
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
387 TestNetworkPlugins/group/flannel/NetCatPod 8.26
388 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
389 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.33
390 TestNetworkPlugins/group/flannel/DNS 0.22
391 TestNetworkPlugins/group/flannel/Localhost 0.17
392 TestNetworkPlugins/group/flannel/HairPin 0.19
393 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
394 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
395 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
396 TestNetworkPlugins/group/bridge/Start 45.84
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
398 TestNetworkPlugins/group/bridge/NetCatPod 9.25
399 TestNetworkPlugins/group/bridge/DNS 0.18
400 TestNetworkPlugins/group/bridge/Localhost 0.16
401 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (12.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-584237 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-584237 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.83689035s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.84s)
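The json-events variant drives "minikube start -o=json" and consumes the line-delimited JSON events it emits; the TestJSONOutput checks listed in the table above (DistinctCurrentSteps, IncreasingCurrentSteps) validate the step counter carried in those events. Below is a minimal sketch of such a consumer, assuming one JSON object per line whose data payload includes currentstep and name fields (field names are inferred from the check names, not a confirmed schema):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event models just the parts of a minikube JSON event this sketch reads.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Pipe the stream in: minikube start -o=json ... | this-program
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event
			}
			if step, ok := ev.Data["currentstep"]; ok {
				fmt.Printf("step %s (%s): %s\n", step, ev.Type, ev.Data["name"])
			}
		}
	}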

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-584237
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-584237: exit status 85 (78.165504ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-584237 | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC |          |
	|         | -p download-only-584237        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 18:43:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:43:21.076875  563587 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:43:21.077042  563587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:43:21.077054  563587 out.go:304] Setting ErrFile to fd 2...
	I0307 18:43:21.077059  563587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:43:21.077306  563587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
	W0307 18:43:21.077446  563587 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18239-558171/.minikube/config/config.json: open /home/jenkins/minikube-integration/18239-558171/.minikube/config/config.json: no such file or directory
	I0307 18:43:21.077889  563587 out.go:298] Setting JSON to true
	I0307 18:43:21.078741  563587 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8745,"bootTime":1709828256,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 18:43:21.078816  563587 start.go:139] virtualization:  
	I0307 18:43:21.082333  563587 out.go:97] [download-only-584237] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 18:43:21.084591  563587 out.go:169] MINIKUBE_LOCATION=18239
	W0307 18:43:21.082582  563587 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball: no such file or directory
	I0307 18:43:21.082631  563587 notify.go:220] Checking for updates...
	I0307 18:43:21.088604  563587 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:43:21.090636  563587 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	I0307 18:43:21.092633  563587 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	I0307 18:43:21.094867  563587 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0307 18:43:21.099183  563587 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 18:43:21.099559  563587 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 18:43:21.121313  563587 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 18:43:21.121419  563587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:43:21.189727  563587 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 18:43:21.179929709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:43:21.189834  563587 docker.go:295] overlay module found
	I0307 18:43:21.192053  563587 out.go:97] Using the docker driver based on user configuration
	I0307 18:43:21.192076  563587 start.go:297] selected driver: docker
	I0307 18:43:21.192083  563587 start.go:901] validating driver "docker" against <nil>
	I0307 18:43:21.192195  563587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:43:21.248011  563587 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 18:43:21.238781375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:43:21.248183  563587 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 18:43:21.248471  563587 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0307 18:43:21.248629  563587 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 18:43:21.251236  563587 out.go:169] Using Docker driver with root privileges
	I0307 18:43:21.253280  563587 cni.go:84] Creating CNI manager for ""
	I0307 18:43:21.253313  563587 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 18:43:21.253328  563587 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 18:43:21.253439  563587 start.go:340] cluster config:
	{Name:download-only-584237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-584237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:43:21.256506  563587 out.go:97] Starting "download-only-584237" primary control-plane node in "download-only-584237" cluster
	I0307 18:43:21.256541  563587 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 18:43:21.259209  563587 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0307 18:43:21.259243  563587 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0307 18:43:21.259412  563587 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 18:43:21.274475  563587 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 18:43:21.275189  563587 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 18:43:21.275292  563587 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 18:43:21.355988  563587 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0307 18:43:21.356025  563587 cache.go:56] Caching tarball of preloaded images
	I0307 18:43:21.356194  563587 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0307 18:43:21.358589  563587 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0307 18:43:21.358617  563587 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0307 18:43:21.478201  563587 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0307 18:43:29.079817  563587 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0307 18:43:29.435751  563587 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0307 18:43:29.435861  563587 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0307 18:43:30.523856  563587 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0307 18:43:30.524230  563587 profile.go:142] Saving config to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/download-only-584237/config.json ...
	I0307 18:43:30.524263  563587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/download-only-584237/config.json: {Name:mk887573399c808abd7d2217925800c08abc270d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:43:30.524462  563587 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0307 18:43:30.525448  563587 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18239-558171/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-584237 host does not exist
	  To start a cluster, run: "minikube start -p download-only-584237"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
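The log above also shows the preload flow: download.go fetches the tarball with an md5 digest embedded in the URL query (checksum=md5:7e3d48ccb9f143791669d02e14ce1643), and preload.go then saves and verifies that checksum against the file on disk. A minimal sketch of the verification step follows, assuming the expected digest is the hex string from the query parameter; verifyMD5 is an illustrative helper, not minikube's API:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes the file at path and compares it to the expected hex digest.
	func verifyMD5(path, wantHex string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		if len(os.Args) != 3 {
			fmt.Fprintln(os.Stderr, "usage: verify <file> <md5-hex>")
			os.Exit(2)
		}
		if err := verifyMD5(os.Args[1], os.Args[2]); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}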

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-584237
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (13.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-992905 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-992905 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.578640061s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (13.58s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-992905
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-992905: exit status 85 (79.016215ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-584237 | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC |                     |
	|         | -p download-only-584237        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC | 07 Mar 24 18:43 UTC |
	| delete  | -p download-only-584237        | download-only-584237 | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC | 07 Mar 24 18:43 UTC |
	| start   | -o=json --download-only        | download-only-992905 | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC |                     |
	|         | -p download-only-992905        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 18:43:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:43:34.326363  563746 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:43:34.326521  563746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:43:34.326532  563746 out.go:304] Setting ErrFile to fd 2...
	I0307 18:43:34.326537  563746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:43:34.326779  563746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
	I0307 18:43:34.327163  563746 out.go:298] Setting JSON to true
	I0307 18:43:34.328023  563746 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8758,"bootTime":1709828256,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 18:43:34.328087  563746 start.go:139] virtualization:  
	I0307 18:43:34.331135  563746 out.go:97] [download-only-992905] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 18:43:34.333487  563746 out.go:169] MINIKUBE_LOCATION=18239
	I0307 18:43:34.331374  563746 notify.go:220] Checking for updates...
	I0307 18:43:34.336251  563746 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:43:34.338903  563746 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	I0307 18:43:34.341304  563746 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	I0307 18:43:34.343170  563746 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0307 18:43:34.348268  563746 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 18:43:34.348539  563746 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 18:43:34.368838  563746 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 18:43:34.368941  563746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:43:34.434004  563746 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 18:43:34.424661813 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:43:34.434161  563746 docker.go:295] overlay module found
	I0307 18:43:34.436352  563746 out.go:97] Using the docker driver based on user configuration
	I0307 18:43:34.436389  563746 start.go:297] selected driver: docker
	I0307 18:43:34.436396  563746 start.go:901] validating driver "docker" against <nil>
	I0307 18:43:34.436512  563746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:43:34.491443  563746 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 18:43:34.482048409 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:43:34.491619  563746 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 18:43:34.491922  563746 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0307 18:43:34.492119  563746 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 18:43:34.494862  563746 out.go:169] Using Docker driver with root privileges
	I0307 18:43:34.496961  563746 cni.go:84] Creating CNI manager for ""
	I0307 18:43:34.496978  563746 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 18:43:34.496991  563746 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 18:43:34.497069  563746 start.go:340] cluster config:
	{Name:download-only-992905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-992905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:43:34.499567  563746 out.go:97] Starting "download-only-992905" primary control-plane node in "download-only-992905" cluster
	I0307 18:43:34.499586  563746 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 18:43:34.501949  563746 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0307 18:43:34.501976  563746 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 18:43:34.502080  563746 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 18:43:34.517807  563746 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 18:43:34.517920  563746 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 18:43:34.517946  563746 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0307 18:43:34.517951  563746 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0307 18:43:34.517960  563746 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0307 18:43:34.597918  563746 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0307 18:43:34.597948  563746 cache.go:56] Caching tarball of preloaded images
	I0307 18:43:34.599671  563746 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 18:43:34.602458  563746 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0307 18:43:34.602489  563746 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0307 18:43:34.729650  563746 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4?checksum=md5:cc2d75db20c4d651f0460755d6df7b03 -> /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0307 18:43:43.406844  563746 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0307 18:43:43.407003  563746 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0307 18:43:44.315327  563746 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0307 18:43:44.315727  563746 profile.go:142] Saving config to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/download-only-992905/config.json ...
	I0307 18:43:44.315760  563746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/download-only-992905/config.json: {Name:mk33c4610890ac8992289106c3bec58a408d7e1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:43:44.315940  563746 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 18:43:44.316794  563746 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18239-558171/.minikube/cache/linux/arm64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-992905 host does not exist
	  To start a cluster, run: "minikube start -p download-only-992905"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
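Both LogsDuration runs above treat exit status 85 from "minikube logs" as the expected result for a download-only profile whose host was never created ("The control-plane node ... host does not exist"). A minimal sketch of asserting a specific exit code with os/exec follows; the binary path is the one used throughout this report, and the assertion style is illustrative rather than the test's exact code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-992905")
		err := cmd.Run()
		// A non-zero exit surfaces as *exec.ExitError, which carries the code.
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 85 {
			fmt.Println("got expected exit status 85: no host to collect logs from")
			return
		}
		fmt.Printf("unexpected result: %v\n", err)
	}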

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-992905
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (10.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-843119 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-843119 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.394782067s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (10.40s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-843119
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-843119: exit status 85 (82.881973ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-584237 | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC |                     |
	|         | -p download-only-584237           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC | 07 Mar 24 18:43 UTC |
	| delete  | -p download-only-584237           | download-only-584237 | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC | 07 Mar 24 18:43 UTC |
	| start   | -o=json --download-only           | download-only-992905 | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC |                     |
	|         | -p download-only-992905           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC | 07 Mar 24 18:43 UTC |
	| delete  | -p download-only-992905           | download-only-992905 | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC | 07 Mar 24 18:43 UTC |
	| start   | -o=json --download-only           | download-only-843119 | jenkins | v1.32.0 | 07 Mar 24 18:43 UTC |                     |
	|         | -p download-only-843119           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 18:43:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:43:48.344910  563912 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:43:48.345065  563912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:43:48.345076  563912 out.go:304] Setting ErrFile to fd 2...
	I0307 18:43:48.345081  563912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:43:48.345326  563912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
	I0307 18:43:48.345784  563912 out.go:298] Setting JSON to true
	I0307 18:43:48.346679  563912 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8772,"bootTime":1709828256,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 18:43:48.346747  563912 start.go:139] virtualization:  
	I0307 18:43:48.349319  563912 out.go:97] [download-only-843119] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 18:43:48.351458  563912 out.go:169] MINIKUBE_LOCATION=18239
	I0307 18:43:48.349550  563912 notify.go:220] Checking for updates...
	I0307 18:43:48.355651  563912 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:43:48.357786  563912 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	I0307 18:43:48.359766  563912 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	I0307 18:43:48.362079  563912 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0307 18:43:48.366102  563912 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 18:43:48.366432  563912 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 18:43:48.387452  563912 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 18:43:48.387581  563912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:43:48.447223  563912 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 18:43:48.437710387 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:43:48.447333  563912 docker.go:295] overlay module found
	I0307 18:43:48.449251  563912 out.go:97] Using the docker driver based on user configuration
	I0307 18:43:48.449288  563912 start.go:297] selected driver: docker
	I0307 18:43:48.449296  563912 start.go:901] validating driver "docker" against <nil>
	I0307 18:43:48.449414  563912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:43:48.513726  563912 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 18:43:48.505076497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:43:48.513899  563912 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 18:43:48.514182  563912 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0307 18:43:48.514339  563912 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 18:43:48.516611  563912 out.go:169] Using Docker driver with root privileges
	I0307 18:43:48.518606  563912 cni.go:84] Creating CNI manager for ""
	I0307 18:43:48.518624  563912 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 18:43:48.518639  563912 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 18:43:48.518723  563912 start.go:340] cluster config:
	{Name:download-only-843119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-843119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:43:48.520993  563912 out.go:97] Starting "download-only-843119" primary control-plane node in "download-only-843119" cluster
	I0307 18:43:48.521013  563912 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 18:43:48.523055  563912 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0307 18:43:48.523086  563912 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0307 18:43:48.523277  563912 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 18:43:48.537788  563912 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 18:43:48.537918  563912 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 18:43:48.537943  563912 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0307 18:43:48.537952  563912 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0307 18:43:48.537960  563912 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0307 18:43:48.586208  563912 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0307 18:43:48.586233  563912 cache.go:56] Caching tarball of preloaded images
	I0307 18:43:48.586726  563912 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0307 18:43:48.588982  563912 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0307 18:43:48.589001  563912 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0307 18:43:48.696824  563912 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:adc883bf092a67b4673b5b5787f99b2f -> /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0307 18:43:54.087753  563912 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0307 18:43:54.087922  563912 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18239-558171/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0307 18:43:55.032341  563912 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on containerd
	I0307 18:43:55.032729  563912 profile.go:142] Saving config to /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/download-only-843119/config.json ...
	I0307 18:43:55.032765  563912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/download-only-843119/config.json: {Name:mk76f9091277e6f6f52e5deab53a70611244d994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:43:55.032961  563912 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0307 18:43:55.033137  563912 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18239-558171/.minikube/cache/linux/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-843119 host does not exist
	  To start a cluster, run: "minikube start -p download-only-843119"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.22s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-843119
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-512824 --alsologtostderr --binary-mirror http://127.0.0.1:36839 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-512824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-512824
--- PASS: TestBinaryMirror (0.60s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-678595
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-678595: exit status 85 (80.894003ms)

-- stdout --
	* Profile "addons-678595" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-678595"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-678595
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-678595: exit status 85 (97.149772ms)

-- stdout --
	* Profile "addons-678595" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-678595"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (121.58s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-678595 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-678595 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m1.581194819s)
--- PASS: TestAddons/Setup (121.58s)

TestAddons/parallel/Registry (16.26s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 66.626287ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-dv5nj" [d4a278a3-fe04-4ecf-959a-50457864474e] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005793677s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-crp5d" [c9166acd-180e-4f89-aa07-bdab8898c012] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005105548s
addons_test.go:340: (dbg) Run:  kubectl --context addons-678595 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-678595 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-678595 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.036431679s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-678595 ip
2024/03/07 18:46:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-678595 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.26s)

TestAddons/parallel/InspektorGadget (10.94s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7zdfn" [34909a78-c486-4e1c-911e-762165bc1403] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005068494s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-678595
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-678595: (5.932062657s)
--- PASS: TestAddons/parallel/InspektorGadget (10.94s)

TestAddons/parallel/MetricsServer (6.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 4.954652ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-hs8l8" [06c1f01e-1aed-470e-a746-3a378cb00b9b] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005129496s
addons_test.go:415: (dbg) Run:  kubectl --context addons-678595 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-678595 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.83s)

TestAddons/parallel/CSI (69.61s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 75.209037ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-678595 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-678595 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [78928e3e-5a15-40a8-9341-2adc38d874a9] Pending
helpers_test.go:344: "task-pv-pod" [78928e3e-5a15-40a8-9341-2adc38d874a9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [78928e3e-5a15-40a8-9341-2adc38d874a9] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003786283s
addons_test.go:584: (dbg) Run:  kubectl --context addons-678595 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-678595 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-678595 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-678595 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-678595 delete pod task-pv-pod: (1.344209419s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-678595 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-678595 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-678595 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [470b5227-f0b1-4cc0-a663-79aa06d3cc2e] Pending
helpers_test.go:344: "task-pv-pod-restore" [470b5227-f0b1-4cc0-a663-79aa06d3cc2e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [470b5227-f0b1-4cc0-a663-79aa06d3cc2e] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004067514s
addons_test.go:626: (dbg) Run:  kubectl --context addons-678595 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-678595 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-678595 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-678595 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-678595 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.936938616s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-678595 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (69.61s)

TestAddons/parallel/Headlamp (11.54s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-678595 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-678595 --alsologtostderr -v=1: (1.531372336s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-b5ls7" [2b59594d-6753-47d4-bfe0-331ee95eba9d] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-b5ls7" [2b59594d-6753-47d4-bfe0-331ee95eba9d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-b5ls7" [2b59594d-6753-47d4-bfe0-331ee95eba9d] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003890782s
--- PASS: TestAddons/parallel/Headlamp (11.54s)

TestAddons/parallel/CloudSpanner (6.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-k64xn" [4b9257bc-733d-4f0d-8a25-881a92936d34] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.006402699s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-678595
--- PASS: TestAddons/parallel/CloudSpanner (6.63s)

TestAddons/parallel/LocalPath (51.37s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-678595 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-678595 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-678595 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [cf92566e-7fe2-4505-ab85-5742ba9911a2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [cf92566e-7fe2-4505-ab85-5742ba9911a2] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [cf92566e-7fe2-4505-ab85-5742ba9911a2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004059169s
addons_test.go:891: (dbg) Run:  kubectl --context addons-678595 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-678595 ssh "cat /opt/local-path-provisioner/pvc-efe16087-3c81-403c-ab7c-a65d837391a7_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-678595 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-678595 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-678595 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-678595 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.286615414s)
--- PASS: TestAddons/parallel/LocalPath (51.37s)

TestAddons/parallel/NvidiaDevicePlugin (5.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pm597" [cd04a706-2c4c-4b67-b86d-138b482338ac] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004764274s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-678595
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.60s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-25ljl" [f0eab977-ff7e-49a8-96bc-a78ef977309c] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004252081s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-678595 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-678595 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-678595
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-678595: (11.967438382s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-678595
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-678595
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-678595
--- PASS: TestAddons/StoppedEnableDisable (12.26s)

TestCertOptions (34.03s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-533978 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-533978 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (31.379610675s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-533978 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-533978 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-533978 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-533978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-533978
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-533978: (1.98671573s)
--- PASS: TestCertOptions (34.03s)

TestCertExpiration (228.29s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-643500 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-643500 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.602431937s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-643500 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-643500 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.307746572s)
helpers_test.go:175: Cleaning up "cert-expiration-643500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-643500
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-643500: (2.375261418s)
--- PASS: TestCertExpiration (228.29s)

TestForceSystemdFlag (38.92s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-093673 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0307 19:26:02.631357  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-093673 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.683633346s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-093673 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-093673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-093673
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-093673: (1.94978042s)
--- PASS: TestForceSystemdFlag (38.92s)

TestForceSystemdEnv (39.88s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-920912 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0307 19:24:05.681262  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-920912 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.087714059s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-920912 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-920912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-920912
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-920912: (2.418974845s)
--- PASS: TestForceSystemdEnv (39.88s)

TestDockerEnvContainerd (46s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-694138 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-694138 --driver=docker  --container-runtime=containerd: (29.92217388s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-694138"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-694138": (1.289527573s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Mk7lbdv8OOwL/agent.580687" SSH_AGENT_PID="580688" DOCKER_HOST=ssh://docker@127.0.0.1:33518 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Mk7lbdv8OOwL/agent.580687" SSH_AGENT_PID="580688" DOCKER_HOST=ssh://docker@127.0.0.1:33518 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Mk7lbdv8OOwL/agent.580687" SSH_AGENT_PID="580688" DOCKER_HOST=ssh://docker@127.0.0.1:33518 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.364868432s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Mk7lbdv8OOwL/agent.580687" SSH_AGENT_PID="580688" DOCKER_HOST=ssh://docker@127.0.0.1:33518 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-694138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-694138
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-694138: (1.966159366s)
--- PASS: TestDockerEnvContainerd (46.00s)

TestErrorSpam/setup (31.72s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-525736 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-525736 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-525736 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-525736 --driver=docker  --container-runtime=containerd: (31.719722618s)
--- PASS: TestErrorSpam/setup (31.72s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (0.99s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 status
--- PASS: TestErrorSpam/status (0.99s)

TestErrorSpam/pause (1.66s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 pause
--- PASS: TestErrorSpam/pause (1.66s)

TestErrorSpam/unpause (1.78s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

TestErrorSpam/stop (1.45s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 stop: (1.234784715s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-525736 --log_dir /tmp/nospam-525736 stop
--- PASS: TestErrorSpam/stop (1.45s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18239-558171/.minikube/files/etc/test/nested/copy/563581/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (59.19s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-788559 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0307 18:51:02.633449  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
E0307 18:51:02.641082  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
E0307 18:51:02.651326  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
E0307 18:51:02.671764  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
E0307 18:51:02.712093  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
E0307 18:51:02.792381  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
E0307 18:51:02.952829  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
E0307 18:51:03.273465  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
E0307 18:51:03.914327  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
E0307 18:51:05.194572  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
E0307 18:51:07.755218  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-788559 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (59.181917585s)
--- PASS: TestFunctional/serial/StartWithProxy (59.19s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.86s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-788559 --alsologtostderr -v=8
E0307 18:51:12.875700  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-788559 --alsologtostderr -v=8: (5.858241129s)
functional_test.go:659: soft start took 5.862094154s for "functional-788559" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.86s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-788559 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-788559 cache add registry.k8s.io/pause:3.1: (1.485375332s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-788559 cache add registry.k8s.io/pause:3.3: (1.332614994s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-788559 cache add registry.k8s.io/pause:latest: (1.301700057s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.12s)

TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-788559 /tmp/TestFunctionalserialCacheCmdcacheadd_local2563275493/001
E0307 18:51:23.116031  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 cache add minikube-local-cache-test:functional-788559
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 cache delete minikube-local-cache-test:functional-788559
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-788559
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788559 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (303.241984ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-788559 cache reload: (1.124781456s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)
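
The reload sequence above deletes a cached image inside the node and restores it from the host-side cache; a minimal sketch under the same assumptions as the earlier cache example:

	# Remove the image inside the node, confirm it is gone (inspecti exits non-zero),
	# then push everything in the cache back into the node and re-check.
	$ minikube -p functional-788559 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
	$ minikube -p functional-788559 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"
	$ minikube -p functional-788559 cache reload
	$ minikube -p functional-788559 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"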

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 kubectl -- --context functional-788559 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-788559 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (42.14s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-788559 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0307 18:51:43.596866  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-788559 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.140244173s)
functional_test.go:757: restart took 42.140354811s for "functional-788559" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.14s)
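
The restart above shows how --extra-config forwards a flag to one control-plane component using component.flag=value syntax; a minimal sketch of the same invocation:

	# --wait=all blocks until every verified component reports healthy after the restart.
	$ minikube start -p functional-788559 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	    --wait=all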

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-788559 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.73s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-788559 logs: (1.725006414s)
--- PASS: TestFunctional/serial/LogsCmd (1.73s)

TestFunctional/serial/LogsFileCmd (1.68s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 logs --file /tmp/TestFunctionalserialLogsFileCmd1232160533/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-788559 logs --file /tmp/TestFunctionalserialLogsFileCmd1232160533/001/logs.txt: (1.677861331s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.68s)
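
The two log subtests above exercise the same output with and without --file; the file form is what minikube's own failure template asks users to attach to bug reports. A minimal sketch (the output path here is arbitrary):

	$ minikube -p functional-788559 logs
	$ minikube -p functional-788559 logs --file /tmp/logs.txt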

TestFunctional/serial/InvalidService (4.83s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-788559 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-788559
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-788559: exit status 115 (675.101612ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30170 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-788559 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.83s)
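
The contents of testdata/invalidsvc.yaml are not shown in this report; the manifest below is a hypothetical equivalent that reproduces the SVC_UNREACHABLE failure (exit 115) captured above, because its selector matches no running pod:

	$ kubectl --context functional-788559 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-pod   # hypothetical label; matches nothing
	  ports:
	  - port: 80
	EOF
	$ minikube -p functional-788559 service invalid-svc; echo "exit: $?"   # exit: 115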

TestFunctional/parallel/ConfigCmd (0.57s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788559 config get cpus: exit status 14 (98.500131ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788559 config get cpus: exit status 14 (107.539306ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)
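
As observed above, config get exits with status 14 while a key is unset; a minimal sketch of the same set/get/unset round-trip:

	$ minikube -p functional-788559 config get cpus; echo "exit: $?"   # exit: 14 while unset
	$ minikube -p functional-788559 config set cpus 2
	$ minikube -p functional-788559 config get cpus                    # prints 2
	$ minikube -p functional-788559 config unset cpus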

TestFunctional/parallel/DashboardCmd (10.13s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-788559 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-788559 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 594856: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.13s)

TestFunctional/parallel/DryRun (0.53s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-788559 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-788559 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (281.203101ms)

-- stdout --
	* [functional-788559] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18239
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0307 18:52:49.602817  594455 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:52:49.609757  594455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:52:49.609768  594455 out.go:304] Setting ErrFile to fd 2...
	I0307 18:52:49.609774  594455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:52:49.610047  594455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
	I0307 18:52:49.610591  594455 out.go:298] Setting JSON to false
	I0307 18:52:49.611769  594455 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9314,"bootTime":1709828256,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 18:52:49.611848  594455 start.go:139] virtualization:  
	I0307 18:52:49.615408  594455 out.go:177] * [functional-788559] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 18:52:49.618786  594455 out.go:177]   - MINIKUBE_LOCATION=18239
	I0307 18:52:49.621183  594455 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:52:49.618975  594455 notify.go:220] Checking for updates...
	I0307 18:52:49.626580  594455 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	I0307 18:52:49.629248  594455 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	I0307 18:52:49.631884  594455 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 18:52:49.635940  594455 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 18:52:49.639672  594455 config.go:182] Loaded profile config "functional-788559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 18:52:49.640242  594455 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 18:52:49.666923  594455 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 18:52:49.667046  594455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:52:49.746535  594455 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-07 18:52:49.734029974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:52:49.746645  594455 docker.go:295] overlay module found
	I0307 18:52:49.749273  594455 out.go:177] * Using the docker driver based on existing profile
	I0307 18:52:49.751423  594455 start.go:297] selected driver: docker
	I0307 18:52:49.751443  594455 start.go:901] validating driver "docker" against &{Name:functional-788559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-788559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:52:49.751592  594455 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 18:52:49.754312  594455 out.go:177] 
	W0307 18:52:49.756218  594455 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0307 18:52:49.758477  594455 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-788559 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.53s)
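
--dry-run validates the requested flags against the existing profile without creating or mutating anything; as captured above, a memory request under the 1800MB floor fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23). A minimal sketch:

	$ minikube start -p functional-788559 --dry-run --memory 250MB \
	    --driver=docker --container-runtime=containerd; echo "exit: $?"   # exit: 23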

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-788559 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-788559 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (243.778574ms)

-- stdout --
	* [functional-788559] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18239
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0307 18:52:49.295359  594412 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:52:49.295546  594412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:52:49.295566  594412 out.go:304] Setting ErrFile to fd 2...
	I0307 18:52:49.295573  594412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:52:49.295963  594412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
	I0307 18:52:49.296505  594412 out.go:298] Setting JSON to false
	I0307 18:52:49.297499  594412 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9313,"bootTime":1709828256,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 18:52:49.297596  594412 start.go:139] virtualization:  
	I0307 18:52:49.300899  594412 out.go:177] * [functional-788559] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0307 18:52:49.303034  594412 out.go:177]   - MINIKUBE_LOCATION=18239
	I0307 18:52:49.304851  594412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:52:49.303104  594412 notify.go:220] Checking for updates...
	I0307 18:52:49.307467  594412 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	I0307 18:52:49.309688  594412 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	I0307 18:52:49.312363  594412 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 18:52:49.314549  594412 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 18:52:49.316835  594412 config.go:182] Loaded profile config "functional-788559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 18:52:49.317508  594412 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 18:52:49.353110  594412 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 18:52:49.353225  594412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:52:49.463395  594412 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-07 18:52:49.453570096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:52:49.463500  594412 docker.go:295] overlay module found
	I0307 18:52:49.465796  594412 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0307 18:52:49.468049  594412 start.go:297] selected driver: docker
	I0307 18:52:49.468065  594412 start.go:901] validating driver "docker" against &{Name:functional-788559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-788559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:52:49.468215  594412 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 18:52:49.471458  594412 out.go:177] 
	W0307 18:52:49.474439  594412 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0307 18:52:49.476554  594412 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

TestFunctional/parallel/StatusCmd (1.1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)
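
The three invocations above cover the status output modes: the default table, a Go template selecting individual fields, and JSON. A minimal sketch:

	$ minikube -p functional-788559 status
	$ minikube -p functional-788559 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
	$ minikube -p functional-788559 status -o json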

TestFunctional/parallel/ServiceCmdConnect (10.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-788559 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-788559 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-d9vf4" [e54e16cb-9e87-4c46-b23f-35b7cdf879be] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-d9vf4" [e54e16cb-9e87-4c46-b23f-35b7cdf879be] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003765624s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32201
functional_test.go:1671: http://192.168.49.2:32201: success! body:

Hostname: hello-node-connect-7799dfb7c6-d9vf4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32201
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.65s)
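
The subtest above is the standard deploy/expose/probe loop; a minimal sketch, assuming curl is available on the host (the echoserver reply is the body printed above):

	$ kubectl --context functional-788559 create deployment hello-node-connect \
	    --image=registry.k8s.io/echoserver-arm:1.8
	$ kubectl --context functional-788559 expose deployment hello-node-connect \
	    --type=NodePort --port=8080
	$ curl -s "$(minikube -p functional-788559 service hello-node-connect --url)"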

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (24.01s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4e7d3f99-0f71-46bc-980f-5797c556fb62] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004601545s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-788559 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-788559 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-788559 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-788559 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [241abcd1-f71d-4ef1-9a4a-a146a94321f0] Pending
helpers_test.go:344: "sp-pod" [241abcd1-f71d-4ef1-9a4a-a146a94321f0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [241abcd1-f71d-4ef1-9a4a-a146a94321f0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004304509s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-788559 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-788559 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-788559 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d5c06e25-252f-498d-9999-b905dfe43bae] Pending
helpers_test.go:344: "sp-pod" [d5c06e25-252f-498d-9999-b905dfe43bae] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004315811s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-788559 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.01s)
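
The PVC subtest writes a file through one pod, deletes the pod, and reads the file back from a replacement, showing that the claim rather than the pod owns the data. testdata/storage-provisioner/pvc.yaml is not shown in this report; the manifest below is a hypothetical equivalent using the claim name from the log (the storage size is assumed):

	$ kubectl --context functional-788559 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	EOF
	$ kubectl --context functional-788559 get pvc myclaim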

TestFunctional/parallel/SSHCmd (0.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

TestFunctional/parallel/CpCmd (2.41s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh -n functional-788559 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 cp functional-788559:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1400503948/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh -n functional-788559 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh -n functional-788559 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.41s)
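
The cp subtest covers the copy directions shown above: host to node, node to host, and host to a not-yet-existing node path. A minimal sketch (local paths are arbitrary):

	$ minikube -p functional-788559 cp testdata/cp-test.txt /home/docker/cp-test.txt
	$ minikube -p functional-788559 cp functional-788559:/home/docker/cp-test.txt ./cp-test.txt
	$ minikube -p functional-788559 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt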

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/563581/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "sudo cat /etc/test/nested/copy/563581/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
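
FileSync exercises minikube's file-sync feature, where files placed under $MINIKUBE_HOME/files are copied into the node at the mirrored path on start. A minimal sketch, assuming the default MINIKUBE_HOME and an arbitrary target path:

	# The file should appear at /etc/test/hello inside the node after start.
	$ mkdir -p ~/.minikube/files/etc/test
	$ echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/hello
	$ minikube start -p functional-788559
	$ minikube -p functional-788559 ssh "cat /etc/test/hello"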

TestFunctional/parallel/CertSync (2.18s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/563581.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "sudo cat /etc/ssl/certs/563581.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/563581.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "sudo cat /usr/share/ca-certificates/563581.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/5635812.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "sudo cat /etc/ssl/certs/5635812.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/5635812.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "sudo cat /usr/share/ca-certificates/5635812.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.18s)

TestFunctional/parallel/NodeLabels (0.21s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-788559 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.21s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788559 ssh "sudo systemctl is-active docker": exit status 1 (377.575817ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788559 ssh "sudo systemctl is-active crio": exit status 1 (377.065765ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)

TestFunctional/parallel/License (0.4s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.40s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-788559 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-788559 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-788559 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 592337: os: process already finished
helpers_test.go:502: unable to terminate pid 592183: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-788559 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-788559 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-788559 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [08fcbf89-4325-4027-87b5-8eaaaa4268b3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [08fcbf89-4325-4027-87b5-8eaaaa4268b3] Running
E0307 18:52:24.557049  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004428066s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.50s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-788559 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.66.5 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-788559 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
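
The tunnel subtests above run minikube tunnel as a background daemon, wait for the nginx-svc LoadBalancer to receive an ingress IP, hit that IP directly, and then tear the tunnel down. A minimal sketch of the same loop in a single shell:

	$ minikube -p functional-788559 tunnel &
	$ kubectl --context functional-788559 get svc nginx-svc \
	    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	$ kill %1   # stopping the tunnel removes the assigned route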

TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-788559 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-788559 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-pl9jh" [ade3dc7e-a34d-4a42-98f7-aee5a0df160c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-pl9jh" [ade3dc7e-a34d-4a42-98f7-aee5a0df160c] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004581019s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "308.155076ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "96.924201ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "431.417513ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "83.242295ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/ServiceCmd/List (0.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.66s)

TestFunctional/parallel/MountCmd/any-port (7.56s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-788559 /tmp/TestFunctionalparallelMountCmdany-port1376579600/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709837565599437149" to /tmp/TestFunctionalparallelMountCmdany-port1376579600/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709837565599437149" to /tmp/TestFunctionalparallelMountCmdany-port1376579600/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709837565599437149" to /tmp/TestFunctionalparallelMountCmdany-port1376579600/001/test-1709837565599437149
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788559 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (485.191512ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar  7 18:52 created-by-test
-rw-r--r-- 1 docker docker 24 Mar  7 18:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar  7 18:52 test-1709837565599437149
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh cat /mount-9p/test-1709837565599437149
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-788559 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e851b49a-6841-4006-b2d8-7fdd7c0d8973] Pending
helpers_test.go:344: "busybox-mount" [e851b49a-6841-4006-b2d8-7fdd7c0d8973] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e851b49a-6841-4006-b2d8-7fdd7c0d8973] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e851b49a-6841-4006-b2d8-7fdd7c0d8973] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004637397s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-788559 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-788559 /tmp/TestFunctionalparallelMountCmdany-port1376579600/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.56s)
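Note how the first findmnt probe exits non-zero and the harness simply retries: the 9p mount daemon needs a moment before the mount shows up in the guest. A minimal Go sketch of that probe-with-retry pattern (binary, profile, and mount point taken from the log; the retry budget and interval are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount re-runs the findmnt probe until the 9p mount appears,
// tolerating early failures the way the test above does.
func waitForMount(profile, mountPoint string) error {
	var lastErr error
	for attempt := 0; attempt < 5; attempt++ { // retry budget is an assumption
		cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Printf("mounted: %s", out)
			return nil
		}
		lastErr = err
		time.Sleep(time.Second)
	}
	return lastErr
}

func main() {
	if err := waitForMount("functional-788559", "/mount-9p"); err != nil {
		fmt.Println("mount never appeared:", err)
	}
}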

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 service list -o json
functional_test.go:1490: Took "519.268011ms" to run "out/minikube-linux-arm64 -p functional-788559 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30668
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30668
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)
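Note: `service hello-node --url` prints the NodePort endpoint (here http://192.168.49.2:30668) instead of opening a browser, so it is easy to consume from scripts. A minimal Go sketch (profile and service names from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the reachable URL of the hello-node service.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-788559",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	endpoint := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:30668
	fmt.Println("endpoint:", endpoint)
}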

TestFunctional/parallel/MountCmd/specific-port (1.34s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-788559 /tmp/TestFunctionalparallelMountCmdspecific-port154075372/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-788559 /tmp/TestFunctionalparallelMountCmdspecific-port154075372/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788559 ssh "sudo umount -f /mount-9p": exit status 1 (334.759297ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-788559 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-788559 /tmp/TestFunctionalparallelMountCmdspecific-port154075372/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.34s)
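Note the tolerated failure above: by the time the explicit `sudo umount -f` runs, stopping the mount daemon has already unmounted /mount-9p, so umount reports "not mounted" (remote status 32) and the test still passes. A sketch of a best-effort cleanup that treats that case as success (paths from the log; the helper name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// umountBestEffort force-unmounts inside the guest, accepting
// "not mounted" as already-clean rather than a cleanup failure.
func umountBestEffort(profile, mountPoint string) error {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "sudo umount -f "+mountPoint)
	out, err := cmd.CombinedOutput()
	if err != nil && strings.Contains(string(out), "not mounted") {
		return nil // nothing left to unmount; fine for cleanup
	}
	return err
}

func main() {
	if err := umountBestEffort("functional-788559", "/mount-9p"); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}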

TestFunctional/parallel/MountCmd/VerifyCleanup (1.94s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-788559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1258607476/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-788559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1258607476/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-788559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1258607476/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-788559 ssh "findmnt -T" /mount1: (1.127998985s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-788559 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-788559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1258607476/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-788559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1258607476/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-788559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1258607476/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.94s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.39s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-788559 version -o=json --components: (1.387547726s)
--- PASS: TestFunctional/parallel/Version/components (1.39s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-788559 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-788559
docker.io/kindest/kindnetd:v20240202-8f1494ea
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-788559 image ls --format short --alsologtostderr:
I0307 18:53:15.227098  596881 out.go:291] Setting OutFile to fd 1 ...
I0307 18:53:15.227354  596881 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 18:53:15.227383  596881 out.go:304] Setting ErrFile to fd 2...
I0307 18:53:15.227402  596881 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 18:53:15.227694  596881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
I0307 18:53:15.228378  596881 config.go:182] Loaded profile config "functional-788559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 18:53:15.228560  596881 config.go:182] Loaded profile config "functional-788559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 18:53:15.229092  596881 cli_runner.go:164] Run: docker container inspect functional-788559 --format={{.State.Status}}
I0307 18:53:15.252462  596881 ssh_runner.go:195] Run: systemctl --version
I0307 18:53:15.252513  596881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-788559
I0307 18:53:15.277920  596881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/functional-788559/id_rsa Username:docker}
I0307 18:53:15.370109  596881 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-788559 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4740c1 | 25.3MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:9961cb | 30.4MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/library/nginx                     | alpine             | sha256:be5e6f | 17.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| docker.io/library/nginx                     | latest             | sha256:760b7c | 67.2MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:04b4c4 | 31.6MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:3ca3ca | 22MB   |
| docker.io/library/minikube-local-cache-test | functional-788559  | sha256:2eacb2 | 1.01kB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:05c284 | 17.1MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-788559 image ls --format table --alsologtostderr:
I0307 18:53:15.563081  596942 out.go:291] Setting OutFile to fd 1 ...
I0307 18:53:15.563226  596942 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 18:53:15.563232  596942 out.go:304] Setting ErrFile to fd 2...
I0307 18:53:15.563237  596942 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 18:53:15.563477  596942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
I0307 18:53:15.564069  596942 config.go:182] Loaded profile config "functional-788559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 18:53:15.564184  596942 config.go:182] Loaded profile config "functional-788559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 18:53:15.564750  596942 cli_runner.go:164] Run: docker container inspect functional-788559 --format={{.State.Status}}
I0307 18:53:15.594061  596942 ssh_runner.go:195] Run: systemctl --version
I0307 18:53:15.594120  596942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-788559
I0307 18:53:15.612613  596942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/functional-788559/id_rsa Username:docker}
I0307 18:53:15.706114  596942 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-788559 image ls --format json --alsologtostderr:
[{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b1
4f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"31582354"},{"id":"sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"17082307"},{"id":"sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"25336339"},{"id":"sha256:2eacb268fe0183feb32469ff53f08cf9bfc7c13f77c38c2f7947629bd5660fef","repoDigests":[],"repoTags":["docker.io/library
/minikube-local-cache-test:functional-788559"],"size":"1007"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:829
e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676","repoDigests":["docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107"],"repoTags":["docker.io/library/nginx:latest"],"size":"67216905"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":["docker.io/library/nginx@sha256:6a2f8b28e45c4a
dea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17601423"},{"id":"sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"30360149"},{"id":"sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"22001357"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-788559 image ls --format json --alsologtostderr:
I0307 18:53:15.503731  596937 out.go:291] Setting OutFile to fd 1 ...
I0307 18:53:15.503860  596937 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 18:53:15.503871  596937 out.go:304] Setting ErrFile to fd 2...
I0307 18:53:15.503875  596937 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 18:53:15.504107  596937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
I0307 18:53:15.504706  596937 config.go:182] Loaded profile config "functional-788559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 18:53:15.504829  596937 config.go:182] Loaded profile config "functional-788559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 18:53:15.505287  596937 cli_runner.go:164] Run: docker container inspect functional-788559 --format={{.State.Status}}
I0307 18:53:15.535302  596937 ssh_runner.go:195] Run: systemctl --version
I0307 18:53:15.535354  596937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-788559
I0307 18:53:15.556504  596937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/functional-788559/id_rsa Username:docker}
I0307 18:53:15.653670  596937 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
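Note: the JSON listing above is a flat array of image records, so it can be decoded directly. A minimal Go sketch (field names taken from the stdout above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-788559",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}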

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-788559 image ls --format yaml --alsologtostderr:
- id: sha256:760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676
repoDigests:
- docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107
repoTags:
- docker.io/library/nginx:latest
size: "67216905"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "30360149"
- id: sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "22001357"
- id: sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "17082307"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:2eacb268fe0183feb32469ff53f08cf9bfc7c13f77c38c2f7947629bd5660fef
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-788559
size: "1007"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests:
- docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9
repoTags:
- docker.io/library/nginx:alpine
size: "17601423"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "25336339"
- id: sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "31582354"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-788559 image ls --format yaml --alsologtostderr:
I0307 18:53:15.242266  596882 out.go:291] Setting OutFile to fd 1 ...
I0307 18:53:15.242473  596882 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 18:53:15.242486  596882 out.go:304] Setting ErrFile to fd 2...
I0307 18:53:15.242493  596882 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 18:53:15.242769  596882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
I0307 18:53:15.243393  596882 config.go:182] Loaded profile config "functional-788559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 18:53:15.243517  596882 config.go:182] Loaded profile config "functional-788559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 18:53:15.244045  596882 cli_runner.go:164] Run: docker container inspect functional-788559 --format={{.State.Status}}
I0307 18:53:15.265377  596882 ssh_runner.go:195] Run: systemctl --version
I0307 18:53:15.265438  596882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-788559
I0307 18:53:15.283863  596882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/functional-788559/id_rsa Username:docker}
I0307 18:53:15.379159  596882 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788559 ssh pgrep buildkitd: exit status 1 (287.809807ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image build -t localhost/my-image:functional-788559 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-788559 image build -t localhost/my-image:functional-788559 testdata/build --alsologtostderr: (2.128631043s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-788559 image build -t localhost/my-image:functional-788559 testdata/build --alsologtostderr:
I0307 18:53:16.056846  597042 out.go:291] Setting OutFile to fd 1 ...
I0307 18:53:16.057420  597042 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 18:53:16.057454  597042 out.go:304] Setting ErrFile to fd 2...
I0307 18:53:16.057479  597042 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 18:53:16.057761  597042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
I0307 18:53:16.058510  597042 config.go:182] Loaded profile config "functional-788559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 18:53:16.059128  597042 config.go:182] Loaded profile config "functional-788559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 18:53:16.059724  597042 cli_runner.go:164] Run: docker container inspect functional-788559 --format={{.State.Status}}
I0307 18:53:16.077605  597042 ssh_runner.go:195] Run: systemctl --version
I0307 18:53:16.077657  597042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-788559
I0307 18:53:16.092945  597042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/functional-788559/id_rsa Username:docker}
I0307 18:53:16.182332  597042 build_images.go:151] Building image from path: /tmp/build.3823544109.tar
I0307 18:53:16.182405  597042 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0307 18:53:16.192098  597042 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3823544109.tar
I0307 18:53:16.195653  597042 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3823544109.tar: stat -c "%s %y" /var/lib/minikube/build/build.3823544109.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3823544109.tar': No such file or directory
I0307 18:53:16.195684  597042 ssh_runner.go:362] scp /tmp/build.3823544109.tar --> /var/lib/minikube/build/build.3823544109.tar (3072 bytes)
I0307 18:53:16.220480  597042 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3823544109
I0307 18:53:16.229931  597042 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3823544109 -xf /var/lib/minikube/build/build.3823544109.tar
I0307 18:53:16.239547  597042 containerd.go:379] Building image: /var/lib/minikube/build/build.3823544109
I0307 18:53:16.239640  597042 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3823544109 --local dockerfile=/var/lib/minikube/build/build.3823544109 --output type=image,name=localhost/my-image:functional-788559
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:874820fbaed5d0101866cf3bd969795439f45675c5699fd53979c80a8663c719 0.0s done
#8 exporting config sha256:78d91215a679256a415f1d3e94023375476ff8534dd1b24c5046acad30860be8 0.0s done
#8 naming to localhost/my-image:functional-788559 done
#8 DONE 0.1s
I0307 18:53:18.089272  597042 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3823544109 --local dockerfile=/var/lib/minikube/build/build.3823544109 --output type=image,name=localhost/my-image:functional-788559: (1.849599787s)
I0307 18:53:18.089368  597042 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3823544109
I0307 18:53:18.099613  597042 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3823544109.tar
I0307 18:53:18.109367  597042 build_images.go:207] Built localhost/my-image:functional-788559 from /tmp/build.3823544109.tar
I0307 18:53:18.109396  597042 build_images.go:123] succeeded building to: functional-788559
I0307 18:53:18.109401  597042 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.64s)
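The stderr above shows the whole remote-build flow: tar the local context, scp it into the guest, unpack it under /var/lib/minikube/build, then drive buildctl with the dockerfile frontend. A condensed Go sketch of the in-guest steps (paths and the buildctl invocation copied from the log; the scp of the tarball is omitted and error handling is minimal):

package main

import (
	"fmt"
	"os/exec"
)

// sshRun executes one step of the remote build flow inside the guest.
func sshRun(remoteCmd string) {
	cmd := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-788559", "ssh", remoteCmd)
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%s: %v: %s", remoteCmd, err, out))
	}
}

func main() {
	const dir = "/var/lib/minikube/build/build.3823544109"
	// The tarball is assumed to have been copied to dir + ".tar" already.
	sshRun("sudo mkdir -p " + dir)
	sshRun("sudo tar -C " + dir + " -xf " + dir + ".tar")
	sshRun("sudo buildctl build --frontend dockerfile.v0" +
		" --local context=" + dir +
		" --local dockerfile=" + dir +
		" --output type=image,name=localhost/my-image:functional-788559")
}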

TestFunctional/parallel/ImageCommands/Setup (2.22s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.197311504s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-788559
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.22s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image rm gcr.io/google-containers/addon-resizer:functional-788559 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-788559
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-788559 image save --daemon gcr.io/google-containers/addon-resizer:functional-788559 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-788559
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.66s)

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-788559
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-788559
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-788559
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMutliControlPlane/serial/StartCluster (119.44s)

=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-927586 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0307 18:53:46.478797  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-927586 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m58.589589903s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (119.44s)

TestMutliControlPlane/serial/DeployApp (41.57s)

=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-927586 -- rollout status deployment/busybox: (38.141816419s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- exec busybox-5b5d89c9d6-964r4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- exec busybox-5b5d89c9d6-fbr76 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- exec busybox-5b5d89c9d6-pdqgc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- exec busybox-5b5d89c9d6-964r4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- exec busybox-5b5d89c9d6-fbr76 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- exec busybox-5b5d89c9d6-pdqgc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- exec busybox-5b5d89c9d6-964r4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- exec busybox-5b5d89c9d6-fbr76 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- exec busybox-5b5d89c9d6-pdqgc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (41.57s)
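The deployment check fans the same nslookup out across all three busybox replicas, one per node, so DNS is verified on every pod network. A minimal Go sketch of that fan-out (pod names would normally come from the jsonpath query above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{ // normally read from: kubectl get pods -o jsonpath=...
		"busybox-5b5d89c9d6-964r4",
		"busybox-5b5d89c9d6-fbr76",
		"busybox-5b5d89c9d6-pdqgc",
	}
	for _, pod := range pods {
		for _, host := range []string{"kubernetes.io", "kubernetes.default",
			"kubernetes.default.svc.cluster.local"} {
			out, err := exec.Command("kubectl", "--context", "ha-927586",
				"exec", pod, "--", "nslookup", host).CombinedOutput()
			if err != nil {
				panic(fmt.Sprintf("%s failed to resolve %s: %s", pod, host, out))
			}
		}
	}
}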

TestMutliControlPlane/serial/PingHostFromPods (1.71s)

=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- exec busybox-5b5d89c9d6-964r4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- exec busybox-5b5d89c9d6-964r4 -- sh -c "ping -c 1 192.168.49.1"
E0307 18:56:02.631045  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- exec busybox-5b5d89c9d6-fbr76 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- exec busybox-5b5d89c9d6-fbr76 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- exec busybox-5b5d89c9d6-pdqgc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-927586 -- exec busybox-5b5d89c9d6-pdqgc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.71s)
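Each pod resolves host.minikube.internal and then pings the extracted gateway address (192.168.49.1). The awk 'NR==5' | cut -d' ' -f3 pipeline simply pulls the IP off the fifth line of busybox nslookup output; a Go sketch of the same extraction (the sample output layout is an assumption about busybox's nslookup formatting):

package main

import (
	"fmt"
	"strings"
)

// hostIP extracts the resolved address from busybox nslookup output,
// mirroring the awk 'NR==5' | cut -d' ' -f3 pipeline in the test.
func hostIP(nslookupOutput string) string {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // NR==5 (awk is 1-indexed)
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // cut -d' ' -f3
}

func main() {
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10\n\nName:      host.minikube.internal\nAddress 1: 192.168.49.1\n"
	fmt.Println(hostIP(sample)) // 192.168.49.1
}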

TestMutliControlPlane/serial/AddWorkerNode (25.63s)

=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-927586 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-927586 -v=7 --alsologtostderr: (24.575537438s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-927586 status -v=7 --alsologtostderr: (1.056802856s)
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (25.63s)

TestMutliControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-927586 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.12s)

TestMutliControlPlane/serial/HAppyAfterClusterStart (0.82s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0307 18:56:30.318987  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.82s)

TestMutliControlPlane/serial/CopyFile (19.78s)

=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-927586 status --output json -v=7 --alsologtostderr: (1.058204709s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp testdata/cp-test.txt ha-927586:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3522633921/001/cp-test_ha-927586.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586:/home/docker/cp-test.txt ha-927586-m02:/home/docker/cp-test_ha-927586_ha-927586-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m02 "sudo cat /home/docker/cp-test_ha-927586_ha-927586-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586:/home/docker/cp-test.txt ha-927586-m03:/home/docker/cp-test_ha-927586_ha-927586-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m03 "sudo cat /home/docker/cp-test_ha-927586_ha-927586-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586:/home/docker/cp-test.txt ha-927586-m04:/home/docker/cp-test_ha-927586_ha-927586-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m04 "sudo cat /home/docker/cp-test_ha-927586_ha-927586-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp testdata/cp-test.txt ha-927586-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586-m02:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3522633921/001/cp-test_ha-927586-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586-m02:/home/docker/cp-test.txt ha-927586:/home/docker/cp-test_ha-927586-m02_ha-927586.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586 "sudo cat /home/docker/cp-test_ha-927586-m02_ha-927586.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586-m02:/home/docker/cp-test.txt ha-927586-m03:/home/docker/cp-test_ha-927586-m02_ha-927586-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m03 "sudo cat /home/docker/cp-test_ha-927586-m02_ha-927586-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586-m02:/home/docker/cp-test.txt ha-927586-m04:/home/docker/cp-test_ha-927586-m02_ha-927586-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m04 "sudo cat /home/docker/cp-test_ha-927586-m02_ha-927586-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp testdata/cp-test.txt ha-927586-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586-m03:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3522633921/001/cp-test_ha-927586-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586-m03:/home/docker/cp-test.txt ha-927586:/home/docker/cp-test_ha-927586-m03_ha-927586.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586 "sudo cat /home/docker/cp-test_ha-927586-m03_ha-927586.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586-m03:/home/docker/cp-test.txt ha-927586-m02:/home/docker/cp-test_ha-927586-m03_ha-927586-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m02 "sudo cat /home/docker/cp-test_ha-927586-m03_ha-927586-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586-m03:/home/docker/cp-test.txt ha-927586-m04:/home/docker/cp-test_ha-927586-m03_ha-927586-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m04 "sudo cat /home/docker/cp-test_ha-927586-m03_ha-927586-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp testdata/cp-test.txt ha-927586-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586-m04:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3522633921/001/cp-test_ha-927586-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586-m04:/home/docker/cp-test.txt ha-927586:/home/docker/cp-test_ha-927586-m04_ha-927586.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586 "sudo cat /home/docker/cp-test_ha-927586-m04_ha-927586.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586-m04:/home/docker/cp-test.txt ha-927586-m02:/home/docker/cp-test_ha-927586-m04_ha-927586-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m02 "sudo cat /home/docker/cp-test_ha-927586-m04_ha-927586-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 cp ha-927586-m04:/home/docker/cp-test.txt ha-927586-m03:/home/docker/cp-test_ha-927586-m04_ha-927586-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 ssh -n ha-927586-m03 "sudo cat /home/docker/cp-test_ha-927586-m04_ha-927586-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (19.78s)
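
The block above is a full matrix of cp/ssh round-trips between nodes. Below is a minimal Go sketch of one such round-trip; the binary path, profile, and remote path are taken from this run and are otherwise arbitrary, and the exec calls stand in for the test's own Run helpers rather than reproducing them.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bin = "out/minikube-linux-arm64" // binary path as invoked in this run
	const profile = "ha-927586"            // profile from this run
	const remote = "/home/docker/cp-test.txt"

	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	// Copy the file onto the primary node...
	if out, err := exec.Command(bin, "-p", profile, "cp",
		"testdata/cp-test.txt", profile+":"+remote).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
	}
	// ...then read it back over ssh and compare byte-for-byte.
	got, err := exec.Command(bin, "-p", profile, "ssh", "-n", profile,
		"sudo cat "+remote).Output()
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		panic("round-tripped file differs from testdata/cp-test.txt")
	}
	fmt.Println("cp round-trip OK")
}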

                                                
                                    
TestMutliControlPlane/serial/StopSecondaryNode (12.85s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-927586 node stop m02 -v=7 --alsologtostderr: (12.095242357s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-927586 status -v=7 --alsologtostderr: exit status 7 (749.51936ms)

                                                
                                                
-- stdout --
	ha-927586
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-927586-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927586-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-927586-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 18:57:02.414250  612487 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:57:02.414422  612487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:57:02.414432  612487 out.go:304] Setting ErrFile to fd 2...
	I0307 18:57:02.414439  612487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:57:02.414683  612487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
	I0307 18:57:02.414921  612487 out.go:298] Setting JSON to false
	I0307 18:57:02.414954  612487 mustload.go:65] Loading cluster: ha-927586
	I0307 18:57:02.414995  612487 notify.go:220] Checking for updates...
	I0307 18:57:02.415498  612487 config.go:182] Loaded profile config "ha-927586": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 18:57:02.415512  612487 status.go:255] checking status of ha-927586 ...
	I0307 18:57:02.416274  612487 cli_runner.go:164] Run: docker container inspect ha-927586 --format={{.State.Status}}
	I0307 18:57:02.434493  612487 status.go:330] ha-927586 host status = "Running" (err=<nil>)
	I0307 18:57:02.434521  612487 host.go:66] Checking if "ha-927586" exists ...
	I0307 18:57:02.434893  612487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-927586
	I0307 18:57:02.451005  612487 host.go:66] Checking if "ha-927586" exists ...
	I0307 18:57:02.451333  612487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 18:57:02.451389  612487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-927586
	I0307 18:57:02.470909  612487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/ha-927586/id_rsa Username:docker}
	I0307 18:57:02.562913  612487 ssh_runner.go:195] Run: systemctl --version
	I0307 18:57:02.569603  612487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:57:02.581452  612487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:57:02.658891  612487 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:76 SystemTime:2024-03-07 18:57:02.648336685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:57:02.659573  612487 kubeconfig.go:125] found "ha-927586" server: "https://192.168.49.254:8443"
	I0307 18:57:02.659599  612487 api_server.go:166] Checking apiserver status ...
	I0307 18:57:02.659645  612487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:57:02.670669  612487 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1486/cgroup
	I0307 18:57:02.682054  612487 api_server.go:182] apiserver freezer: "6:freezer:/docker/4ef7abe01651b75c2a865808bde5656a022ab28b26f7aacad0df27a60c0650f4/kubepods/burstable/pod528a0dc3f492f247a3f7adc6a759356d/ef005a4b874d870fbe8c7b0e32445da88d7fec977de871e328f4b8f68cc22d6b"
	I0307 18:57:02.682180  612487 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4ef7abe01651b75c2a865808bde5656a022ab28b26f7aacad0df27a60c0650f4/kubepods/burstable/pod528a0dc3f492f247a3f7adc6a759356d/ef005a4b874d870fbe8c7b0e32445da88d7fec977de871e328f4b8f68cc22d6b/freezer.state
	I0307 18:57:02.692587  612487 api_server.go:204] freezer state: "THAWED"
	I0307 18:57:02.692624  612487 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0307 18:57:02.701081  612487 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0307 18:57:02.701110  612487 status.go:422] ha-927586 apiserver status = Running (err=<nil>)
	I0307 18:57:02.701121  612487 status.go:257] ha-927586 status: &{Name:ha-927586 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 18:57:02.701177  612487 status.go:255] checking status of ha-927586-m02 ...
	I0307 18:57:02.701512  612487 cli_runner.go:164] Run: docker container inspect ha-927586-m02 --format={{.State.Status}}
	I0307 18:57:02.718604  612487 status.go:330] ha-927586-m02 host status = "Stopped" (err=<nil>)
	I0307 18:57:02.718629  612487 status.go:343] host is not running, skipping remaining checks
	I0307 18:57:02.718637  612487 status.go:257] ha-927586-m02 status: &{Name:ha-927586-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 18:57:02.718675  612487 status.go:255] checking status of ha-927586-m03 ...
	I0307 18:57:02.718985  612487 cli_runner.go:164] Run: docker container inspect ha-927586-m03 --format={{.State.Status}}
	I0307 18:57:02.735252  612487 status.go:330] ha-927586-m03 host status = "Running" (err=<nil>)
	I0307 18:57:02.735279  612487 host.go:66] Checking if "ha-927586-m03" exists ...
	I0307 18:57:02.735693  612487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-927586-m03
	I0307 18:57:02.759059  612487 host.go:66] Checking if "ha-927586-m03" exists ...
	I0307 18:57:02.759383  612487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 18:57:02.759425  612487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-927586-m03
	I0307 18:57:02.775938  612487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/ha-927586-m03/id_rsa Username:docker}
	I0307 18:57:02.866819  612487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:57:02.879934  612487 kubeconfig.go:125] found "ha-927586" server: "https://192.168.49.254:8443"
	I0307 18:57:02.879963  612487 api_server.go:166] Checking apiserver status ...
	I0307 18:57:02.880007  612487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:57:02.891413  612487 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1300/cgroup
	I0307 18:57:02.900631  612487 api_server.go:182] apiserver freezer: "6:freezer:/docker/3acc37d001a9395e2f7da2415a23af8da14ae60b70d83ffd1f98b0d6963d7822/kubepods/burstable/pod76920495da02565cb0478fd52d132aac/b94886f877b66100f4e8ac95599d54a50952bfad041bd940a3b58734907bf2f3"
	I0307 18:57:02.900730  612487 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3acc37d001a9395e2f7da2415a23af8da14ae60b70d83ffd1f98b0d6963d7822/kubepods/burstable/pod76920495da02565cb0478fd52d132aac/b94886f877b66100f4e8ac95599d54a50952bfad041bd940a3b58734907bf2f3/freezer.state
	I0307 18:57:02.910428  612487 api_server.go:204] freezer state: "THAWED"
	I0307 18:57:02.910456  612487 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0307 18:57:02.919160  612487 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0307 18:57:02.919189  612487 status.go:422] ha-927586-m03 apiserver status = Running (err=<nil>)
	I0307 18:57:02.919199  612487 status.go:257] ha-927586-m03 status: &{Name:ha-927586-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 18:57:02.919239  612487 status.go:255] checking status of ha-927586-m04 ...
	I0307 18:57:02.919611  612487 cli_runner.go:164] Run: docker container inspect ha-927586-m04 --format={{.State.Status}}
	I0307 18:57:02.937887  612487 status.go:330] ha-927586-m04 host status = "Running" (err=<nil>)
	I0307 18:57:02.937911  612487 host.go:66] Checking if "ha-927586-m04" exists ...
	I0307 18:57:02.938206  612487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-927586-m04
	I0307 18:57:02.954154  612487 host.go:66] Checking if "ha-927586-m04" exists ...
	I0307 18:57:02.954452  612487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 18:57:02.954497  612487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-927586-m04
	I0307 18:57:02.973398  612487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33548 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/ha-927586-m04/id_rsa Username:docker}
	I0307 18:57:03.069594  612487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:57:03.082659  612487 status.go:257] ha-927586-m04 status: &{Name:ha-927586-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopSecondaryNode (12.85s)
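
Note the shape of the check above: with a node stopped, "status" exits non-zero (7 here) but still prints the per-node table on stdout, so the test asserts on both. The Go sketch below shows how a caller can consume that contract; it is illustrative under the assumption that stdout stays parseable on a non-zero exit, as this log shows, and is not the test's actual helper.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Profile and binary path are the ones from this run.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-927586", "status")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("all components running:\n%s", out)
	case errors.As(err, &exitErr):
		// A non-zero exit (7 in the log above) flags that at least one
		// component is stopped; the table on stdout says which.
		fmt.Printf("degraded (exit %d):\n%s", exitErr.ExitCode(), out)
	default:
		panic(err) // binary missing or not runnable, not a status result
	}
}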

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)
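
The Degraded and HAppy subtests in this suite all reduce to one call, "profile list --output json", followed by an assertion on the reported cluster status. The sketch below decodes only what such a check needs; the "valid"/"Name"/"Status" field names are an assumption about the profile-list schema inferred from the test names, not verified against this build.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// e.g. "ha-927586: Degraded" after a control-plane node stops
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}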

                                                
                                    
TestMutliControlPlane/serial/RestartSecondaryNode (18.79s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 node start m02 -v=7 --alsologtostderr
E0307 18:57:19.662457  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
E0307 18:57:19.667745  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
E0307 18:57:19.678484  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
E0307 18:57:19.699199  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
E0307 18:57:19.739461  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
E0307 18:57:19.819727  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
E0307 18:57:19.980250  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
E0307 18:57:20.301252  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
E0307 18:57:20.942462  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-927586 node start m02 -v=7 --alsologtostderr: (17.469579274s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 status -v=7 --alsologtostderr
E0307 18:57:22.222966  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-927586 status -v=7 --alsologtostderr: (1.187968286s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMutliControlPlane/serial/RestartSecondaryNode (18.79s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

                                                
                                    
TestMutliControlPlane/serial/RestartClusterKeepsNodes (118.4s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-927586 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-927586 -v=7 --alsologtostderr
E0307 18:57:24.783098  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
E0307 18:57:29.903935  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
E0307 18:57:40.145664  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-927586 -v=7 --alsologtostderr: (26.259368421s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-927586 --wait=true -v=7 --alsologtostderr
E0307 18:58:00.625951  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
E0307 18:58:41.586955  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-927586 --wait=true -v=7 --alsologtostderr: (1m31.965576084s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-927586
--- PASS: TestMutliControlPlane/serial/RestartClusterKeepsNodes (118.40s)

                                                
                                    
TestMutliControlPlane/serial/DeleteSecondaryNode (11.33s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-927586 node delete m03 -v=7 --alsologtostderr: (10.243014864s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/DeleteSecondaryNode (11.33s)
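
The go-template at ha_test.go:519 above walks every node's status.conditions and prints the Ready condition's status, one line per node, so the test can assert the remaining nodes all report True after the delete. Below is the same check written against "kubectl get nodes -o json" for readers who find the nested ranges hard to scan; the field names are the standard Kubernetes Node schema.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		panic(err)
	}
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}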

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

                                                
                                    
TestMutliControlPlane/serial/StopCluster (35.91s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 stop -v=7 --alsologtostderr
E0307 19:00:03.507121  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-927586 stop -v=7 --alsologtostderr: (35.787227821s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-927586 status -v=7 --alsologtostderr: exit status 7 (126.523698ms)

                                                
                                                
-- stdout --
	ha-927586
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927586-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927586-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:00:09.368639  625666 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:00:09.368780  625666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:00:09.368791  625666 out.go:304] Setting ErrFile to fd 2...
	I0307 19:00:09.368796  625666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:00:09.369019  625666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
	I0307 19:00:09.369206  625666 out.go:298] Setting JSON to false
	I0307 19:00:09.369241  625666 mustload.go:65] Loading cluster: ha-927586
	I0307 19:00:09.369300  625666 notify.go:220] Checking for updates...
	I0307 19:00:09.369695  625666 config.go:182] Loaded profile config "ha-927586": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 19:00:09.369713  625666 status.go:255] checking status of ha-927586 ...
	I0307 19:00:09.370242  625666 cli_runner.go:164] Run: docker container inspect ha-927586 --format={{.State.Status}}
	I0307 19:00:09.394598  625666 status.go:330] ha-927586 host status = "Stopped" (err=<nil>)
	I0307 19:00:09.394621  625666 status.go:343] host is not running, skipping remaining checks
	I0307 19:00:09.394629  625666 status.go:257] ha-927586 status: &{Name:ha-927586 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 19:00:09.394664  625666 status.go:255] checking status of ha-927586-m02 ...
	I0307 19:00:09.394954  625666 cli_runner.go:164] Run: docker container inspect ha-927586-m02 --format={{.State.Status}}
	I0307 19:00:09.411036  625666 status.go:330] ha-927586-m02 host status = "Stopped" (err=<nil>)
	I0307 19:00:09.411060  625666 status.go:343] host is not running, skipping remaining checks
	I0307 19:00:09.411067  625666 status.go:257] ha-927586-m02 status: &{Name:ha-927586-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 19:00:09.411088  625666 status.go:255] checking status of ha-927586-m04 ...
	I0307 19:00:09.411405  625666 cli_runner.go:164] Run: docker container inspect ha-927586-m04 --format={{.State.Status}}
	I0307 19:00:09.437735  625666 status.go:330] ha-927586-m04 host status = "Stopped" (err=<nil>)
	I0307 19:00:09.437758  625666 status.go:343] host is not running, skipping remaining checks
	I0307 19:00:09.437773  625666 status.go:257] ha-927586-m04 status: &{Name:ha-927586-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopCluster (35.91s)

                                                
                                    
TestMutliControlPlane/serial/RestartCluster (69.99s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-927586 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0307 19:01:02.631187  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-927586 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m8.969401194s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/RestartCluster (69.99s)

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                    
TestMutliControlPlane/serial/AddSecondaryNode (42.86s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-927586 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-927586 --control-plane -v=7 --alsologtostderr: (41.652652244s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-927586 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-927586 status -v=7 --alsologtostderr: (1.209424396s)
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (42.86s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.8s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.80s)

                                                
                                    
TestJSONOutput/start/Command (55.62s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-972039 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0307 19:02:19.663081  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
E0307 19:02:47.348264  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-972039 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (55.605458625s)
--- PASS: TestJSONOutput/start/Command (55.62s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
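
The two parallel audits above assert a property of the --output=json stream rather than re-running anything: every io.k8s.sigs.minikube.step event carries a data.currentstep, and across a run those values should be distinct and increasing. A sketch of that check follows; the envelope field names are taken from the TestErrorJSONOutput capture later in this report, and piping in real "start --output=json" output is left to the reader.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

type event struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
	} `json:"data"`
}

func main() {
	last := -1
	sc := bufio.NewScanner(os.Stdin) // pipe "minikube start --output=json" here
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // ignore non-JSON lines and non-step events
		}
		n, err := strconv.Atoi(ev.Data.CurrentStep)
		if err != nil {
			panic(err)
		}
		// Strictly increasing implies distinct, covering both audits.
		if n <= last {
			panic(fmt.Sprintf("currentstep %d seen after %d", n, last))
		}
		last = n
	}
	fmt.Println("currentstep values distinct and increasing")
}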

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-972039 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-972039 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.76s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-972039 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-972039 --output=json --user=testUser: (5.762580138s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-246005 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-246005 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.375426ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"78fa3404-8569-4c84-81eb-222ead997a43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-246005] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"043b7de9-d62c-465b-9e7d-ba841394703d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18239"}}
	{"specversion":"1.0","id":"b4d2d2e8-c97d-476a-81f6-e4d574e99123","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a2d7c75b-9c86-4f0a-a816-7dfdaf87fcd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig"}}
	{"specversion":"1.0","id":"a41830f4-954c-49d0-b29d-00f044d075b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube"}}
	{"specversion":"1.0","id":"a455b5ce-3c0a-47f7-bbe6-9f27d49feb10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4796f565-3e47-4542-b1db-18a38f83621e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"38d0deb1-bcae-4a8d-a7c5-85dcd7201be3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-246005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-246005
--- PASS: TestErrorJSONOutput (0.23s)
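
Each line in the capture above is a CloudEvents-style envelope, which is what makes the failure machine-readable: a wrapper can pull the exit code and stable error name out of the io.k8s.sigs.minikube.error event instead of scraping human-oriented text. A decoding sketch, using the error event from this very run:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// One event line copied verbatim from the capture above.
	line := `{"specversion":"1.0","id":"38d0deb1-bcae-4a8d-a7c5-85dcd7201be3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev struct {
		Type string `json:"type"`
		Data struct {
			ExitCode string `json:"exitcode"`
			Message  string `json:"message"`
			Name     string `json:"name"`
		} `json:"data"`
	}
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		// Prints: DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/arm64
		fmt.Printf("%s (exit %s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
	}
}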

                                                
                                    
TestKicCustomNetwork/create_custom_network (42.81s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-978056 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-978056 --network=: (40.667606505s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-978056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-978056
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-978056: (2.120916199s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.81s)
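
The assertion behind kic_custom_network_test.go:150 is simply that the requested network name shows up in "docker network ls". A sketch of that check, with the network name this run created standing in for any name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same listing command the test runs.
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		panic(err)
	}
	for _, name := range strings.Fields(string(out)) {
		if name == "docker-network-978056" { // network created by this run
			fmt.Println("custom network exists")
			return
		}
	}
	panic("custom network not found")
}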

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.96s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-486475 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-486475 --network=bridge: (32.013624081s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-486475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-486475
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-486475: (1.926245392s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.96s)

                                                
                                    
TestKicExistingNetwork (33.85s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-863760 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-863760 --network=existing-network: (31.677539818s)
helpers_test.go:175: Cleaning up "existing-network-863760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-863760
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-863760: (2.033108622s)
--- PASS: TestKicExistingNetwork (33.85s)

                                                
                                    
TestKicCustomSubnet (35.01s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-724203 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-724203 --subnet=192.168.60.0/24: (32.912017272s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-724203 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-724203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-724203
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-724203: (2.078406887s)
--- PASS: TestKicCustomSubnet (35.01s)
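
Here the follow-up check inspects the created network's IPAM config with the same format string the test uses and compares it to the --subnet flag. A sketch, with the network name and subnet from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const want = "192.168.60.0/24" // subnet requested in this run
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-724203",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		panic(fmt.Sprintf("subnet = %q, want %q", got, want))
	}
	fmt.Println("subnet matches requested --subnet")
}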

                                                
                                    
TestKicStaticIP (37.85s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-129976 --static-ip=192.168.200.200
E0307 19:06:02.631908  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-129976 --static-ip=192.168.200.200: (35.669162505s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-129976 ip
helpers_test.go:175: Cleaning up "static-ip-129976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-129976
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-129976: (2.030801357s)
--- PASS: TestKicStaticIP (37.85s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (70.2s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-361355 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-361355 --driver=docker  --container-runtime=containerd: (30.653352901s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-363849 --driver=docker  --container-runtime=containerd
E0307 19:07:19.662720  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
E0307 19:07:25.680172  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-363849 --driver=docker  --container-runtime=containerd: (34.10403872s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-361355
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-363849
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-363849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-363849
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-363849: (1.97914398s)
helpers_test.go:175: Cleaning up "first-361355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-361355
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-361355: (2.229129688s)
--- PASS: TestMinikubeProfile (70.20s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.26s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-654342 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-654342 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.259116053s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.26s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-654342 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
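
Mount verification here is deliberately shallow: an "ssh -- ls /minikube-host" that succeeds is taken as proof the host mount is live (that the default transport is 9p is an assumption suggested by the --mount-msize flag above, not asserted by this log). A sketch of the same probe, with the profile name from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List the mounted host directory inside the guest over ssh.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-1-654342",
		"ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("mount not visible: %v\n%s", err, out))
	}
	fmt.Printf("host mount contents:\n%s", out)
}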

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.47s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-667979 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-667979 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.466554428s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.47s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-667979 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-654342 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-654342 --alsologtostderr -v=5: (1.612839031s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-667979 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-667979
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-667979: (1.206001152s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.38s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-667979
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-667979: (6.383316383s)
--- PASS: TestMountStart/serial/RestartStopped (7.38s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-667979 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (75.52s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-548356 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-548356 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m15.026940444s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.52s)
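
A two-node cluster with the same wait semantics can be brought up directly; a sketch (profile name arbitrary):

    # Create a two-node cluster and block until all components are healthy:
    minikube start -p multinode-demo --wait=true --memory=2200 --nodes=2 \
      --driver=docker --container-runtime=containerd
    # Both the control plane and the worker should report Running:
    minikube -p multinode-demo status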

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.84s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548356 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548356 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-548356 -- rollout status deployment/busybox: (3.76965042s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548356 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548356 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548356 -- exec busybox-5b5d89c9d6-89bjk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548356 -- exec busybox-5b5d89c9d6-pmx7r -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548356 -- exec busybox-5b5d89c9d6-89bjk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548356 -- exec busybox-5b5d89c9d6-pmx7r -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548356 -- exec busybox-5b5d89c9d6-89bjk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548356 -- exec busybox-5b5d89c9d6-pmx7r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.84s)
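
The DNS checks above boil down to exec-ing nslookup in every replica of the test deployment; a condensed sketch of the same loop (pod names vary per run, so they are listed rather than hard-coded; the loop assumes only the busybox pods exist in the default namespace):

    kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl rollout status deployment/busybox
    # Resolve an external name and the in-cluster API service from each pod:
    for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
      kubectl exec "$pod" -- nslookup kubernetes.io
      kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done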

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.06s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548356 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548356 -- exec busybox-5b5d89c9d6-89bjk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548356 -- exec busybox-5b5d89c9d6-89bjk -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548356 -- exec busybox-5b5d89c9d6-pmx7r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548356 -- exec busybox-5b5d89c9d6-pmx7r -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.06s)
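
The awk/cut pipeline above plucks the host gateway address out of busybox's nslookup output (the address sits on line 5, field 3 of that format), and the extracted IP is then pinged from inside the pod. A sketch, assuming $pod holds one of the busybox pod names:

    # Resolve host.minikube.internal from inside the pod and keep only the IP:
    HOST_IP=$(kubectl exec "$pod" -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    # Prove the pod can reach the host network (192.168.58.1 in this run):
    kubectl exec "$pod" -- ping -c 1 "$HOST_IP"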

                                                
                                    
TestMultiNode/serial/AddNode (19.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-548356 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-548356 -v 3 --alsologtostderr: (18.419496241s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-548356 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.33s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 cp testdata/cp-test.txt multinode-548356:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 cp multinode-548356:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1076999093/001/cp-test_multinode-548356.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 cp multinode-548356:/home/docker/cp-test.txt multinode-548356-m02:/home/docker/cp-test_multinode-548356_multinode-548356-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356-m02 "sudo cat /home/docker/cp-test_multinode-548356_multinode-548356-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 cp multinode-548356:/home/docker/cp-test.txt multinode-548356-m03:/home/docker/cp-test_multinode-548356_multinode-548356-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356-m03 "sudo cat /home/docker/cp-test_multinode-548356_multinode-548356-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 cp testdata/cp-test.txt multinode-548356-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 cp multinode-548356-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1076999093/001/cp-test_multinode-548356-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 cp multinode-548356-m02:/home/docker/cp-test.txt multinode-548356:/home/docker/cp-test_multinode-548356-m02_multinode-548356.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356 "sudo cat /home/docker/cp-test_multinode-548356-m02_multinode-548356.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 cp multinode-548356-m02:/home/docker/cp-test.txt multinode-548356-m03:/home/docker/cp-test_multinode-548356-m02_multinode-548356-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356-m03 "sudo cat /home/docker/cp-test_multinode-548356-m02_multinode-548356-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 cp testdata/cp-test.txt multinode-548356-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 cp multinode-548356-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1076999093/001/cp-test_multinode-548356-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 cp multinode-548356-m03:/home/docker/cp-test.txt multinode-548356:/home/docker/cp-test_multinode-548356-m03_multinode-548356.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356 "sudo cat /home/docker/cp-test_multinode-548356-m03_multinode-548356.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 cp multinode-548356-m03:/home/docker/cp-test.txt multinode-548356-m02:/home/docker/cp-test_multinode-548356-m03_multinode-548356-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 ssh -n multinode-548356-m02 "sudo cat /home/docker/cp-test_multinode-548356-m03_multinode-548356-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.33s)
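
The copy matrix above covers the three forms "minikube cp" accepts: local-to-node, node-to-local, and node-to-node, each verified with ssh -n <node> and sudo cat. A sketch using an arbitrary profile whose second node carries the usual -m02 suffix:

    # local file -> node:
    minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
    # node -> local path:
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> another node:
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt \
      multinode-demo-m02:/home/docker/cp-test.txt
    # verify on the target node:
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"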

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-548356 node stop m03: (1.215206159s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-548356 status: exit status 7 (522.197026ms)

                                                
                                                
-- stdout --
	multinode-548356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-548356-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-548356-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-548356 status --alsologtostderr: exit status 7 (501.988348ms)

                                                
                                                
-- stdout --
	multinode-548356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-548356-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-548356-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:09:52.545799  677381 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:09:52.545978  677381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:09:52.545998  677381 out.go:304] Setting ErrFile to fd 2...
	I0307 19:09:52.546025  677381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:09:52.546307  677381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
	I0307 19:09:52.546521  677381 out.go:298] Setting JSON to false
	I0307 19:09:52.546588  677381 mustload.go:65] Loading cluster: multinode-548356
	I0307 19:09:52.546614  677381 notify.go:220] Checking for updates...
	I0307 19:09:52.547028  677381 config.go:182] Loaded profile config "multinode-548356": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 19:09:52.547059  677381 status.go:255] checking status of multinode-548356 ...
	I0307 19:09:52.547918  677381 cli_runner.go:164] Run: docker container inspect multinode-548356 --format={{.State.Status}}
	I0307 19:09:52.567096  677381 status.go:330] multinode-548356 host status = "Running" (err=<nil>)
	I0307 19:09:52.567139  677381 host.go:66] Checking if "multinode-548356" exists ...
	I0307 19:09:52.567464  677381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-548356
	I0307 19:09:52.584467  677381 host.go:66] Checking if "multinode-548356" exists ...
	I0307 19:09:52.584886  677381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 19:09:52.584950  677381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548356
	I0307 19:09:52.608610  677381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33653 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/multinode-548356/id_rsa Username:docker}
	I0307 19:09:52.698704  677381 ssh_runner.go:195] Run: systemctl --version
	I0307 19:09:52.702772  677381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 19:09:52.715225  677381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 19:09:52.769895  677381 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:66 SystemTime:2024-03-07 19:09:52.760901971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 19:09:52.770478  677381 kubeconfig.go:125] found "multinode-548356" server: "https://192.168.58.2:8443"
	I0307 19:09:52.770505  677381 api_server.go:166] Checking apiserver status ...
	I0307 19:09:52.770551  677381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:09:52.781973  677381 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1419/cgroup
	I0307 19:09:52.791431  677381 api_server.go:182] apiserver freezer: "6:freezer:/docker/a64ce78f612721d0707bf62c66b6c13ecf36f414f8712d9a0a54479b8134035c/kubepods/burstable/podd5b620bc3fd15e266bcf095559e410ba/84d53baaa3270611325671407cc44251c2c5aabd8aae1f8605f8aeaf7f33775b"
	I0307 19:09:52.791512  677381 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a64ce78f612721d0707bf62c66b6c13ecf36f414f8712d9a0a54479b8134035c/kubepods/burstable/podd5b620bc3fd15e266bcf095559e410ba/84d53baaa3270611325671407cc44251c2c5aabd8aae1f8605f8aeaf7f33775b/freezer.state
	I0307 19:09:52.799969  677381 api_server.go:204] freezer state: "THAWED"
	I0307 19:09:52.800003  677381 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0307 19:09:52.808581  677381 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0307 19:09:52.808606  677381 status.go:422] multinode-548356 apiserver status = Running (err=<nil>)
	I0307 19:09:52.808618  677381 status.go:257] multinode-548356 status: &{Name:multinode-548356 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 19:09:52.808637  677381 status.go:255] checking status of multinode-548356-m02 ...
	I0307 19:09:52.808946  677381 cli_runner.go:164] Run: docker container inspect multinode-548356-m02 --format={{.State.Status}}
	I0307 19:09:52.824707  677381 status.go:330] multinode-548356-m02 host status = "Running" (err=<nil>)
	I0307 19:09:52.824735  677381 host.go:66] Checking if "multinode-548356-m02" exists ...
	I0307 19:09:52.825036  677381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-548356-m02
	I0307 19:09:52.840761  677381 host.go:66] Checking if "multinode-548356-m02" exists ...
	I0307 19:09:52.841070  677381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 19:09:52.841117  677381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548356-m02
	I0307 19:09:52.859790  677381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33658 SSHKeyPath:/home/jenkins/minikube-integration/18239-558171/.minikube/machines/multinode-548356-m02/id_rsa Username:docker}
	I0307 19:09:52.950842  677381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 19:09:52.962331  677381 status.go:257] multinode-548356-m02 status: &{Name:multinode-548356-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0307 19:09:52.962367  677381 status.go:255] checking status of multinode-548356-m03 ...
	I0307 19:09:52.962678  677381 cli_runner.go:164] Run: docker container inspect multinode-548356-m03 --format={{.State.Status}}
	I0307 19:09:52.978350  677381 status.go:330] multinode-548356-m03 host status = "Stopped" (err=<nil>)
	I0307 19:09:52.978373  677381 status.go:343] host is not running, skipping remaining checks
	I0307 19:09:52.978380  677381 status.go:257] multinode-548356-m03 status: &{Name:multinode-548356-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
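
Note the exit-code convention exercised here: once any host in the profile is stopped, minikube status exits 7 while still printing per-node detail on stdout. A sketch of stopping one worker and reading the degraded status without aborting a script:

    minikube -p multinode-demo node stop m03
    # status exits 7 when a host is Stopped, so capture instead of failing:
    minikube -p multinode-demo status || echo "status exit code: $?"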

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.47s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-548356 node start m03 -v=7 --alsologtostderr: (8.679657613s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.47s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.04s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-548356
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-548356
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-548356: (24.94762024s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-548356 --wait=true -v=8 --alsologtostderr
E0307 19:11:02.631405  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-548356 --wait=true -v=8 --alsologtostderr: (54.904598936s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-548356
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.04s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.44s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-548356 node delete m03: (4.774303906s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.44s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.02s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-548356 stop: (23.837756649s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-548356 status: exit status 7 (96.489989ms)

                                                
                                                
-- stdout --
	multinode-548356
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-548356-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-548356 status --alsologtostderr: exit status 7 (89.481884ms)

                                                
                                                
-- stdout --
	multinode-548356
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-548356-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:11:51.916483  684955 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:11:51.916671  684955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:11:51.916691  684955 out.go:304] Setting ErrFile to fd 2...
	I0307 19:11:51.916709  684955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:11:51.916968  684955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
	I0307 19:11:51.917184  684955 out.go:298] Setting JSON to false
	I0307 19:11:51.917247  684955 mustload.go:65] Loading cluster: multinode-548356
	I0307 19:11:51.917338  684955 notify.go:220] Checking for updates...
	I0307 19:11:51.917735  684955 config.go:182] Loaded profile config "multinode-548356": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 19:11:51.917777  684955 status.go:255] checking status of multinode-548356 ...
	I0307 19:11:51.918314  684955 cli_runner.go:164] Run: docker container inspect multinode-548356 --format={{.State.Status}}
	I0307 19:11:51.935757  684955 status.go:330] multinode-548356 host status = "Stopped" (err=<nil>)
	I0307 19:11:51.935778  684955 status.go:343] host is not running, skipping remaining checks
	I0307 19:11:51.935786  684955 status.go:257] multinode-548356 status: &{Name:multinode-548356 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 19:11:51.935817  684955 status.go:255] checking status of multinode-548356-m02 ...
	I0307 19:11:51.936164  684955 cli_runner.go:164] Run: docker container inspect multinode-548356-m02 --format={{.State.Status}}
	I0307 19:11:51.952260  684955 status.go:330] multinode-548356-m02 host status = "Stopped" (err=<nil>)
	I0307 19:11:51.952281  684955 status.go:343] host is not running, skipping remaining checks
	I0307 19:11:51.952289  684955 status.go:257] multinode-548356-m02 status: &{Name:multinode-548356-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.88s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-548356 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0307 19:12:19.662620  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-548356 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (53.164989276s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548356 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.88s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.79s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-548356
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-548356-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-548356-m02 --driver=docker  --container-runtime=containerd: exit status 14 (92.879247ms)

                                                
                                                
-- stdout --
	* [multinode-548356-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18239
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-548356-m02' is duplicated with machine name 'multinode-548356-m02' in profile 'multinode-548356'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-548356-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-548356-m03 --driver=docker  --container-runtime=containerd: (34.986987461s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-548356
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-548356: exit status 80 (327.989867ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-548356 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-548356-m03 already exists in multinode-548356-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-548356-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-548356-m03: (2.319092197s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.79s)

                                                
                                    
TestPreload (112.79s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-447750 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0307 19:13:42.708701  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-447750 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m15.510012819s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-447750 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-447750 image pull gcr.io/k8s-minikube/busybox: (1.240136487s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-447750
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-447750: (11.983671292s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-447750 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-447750 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.226353095s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-447750 image list
helpers_test.go:175: Cleaning up "test-preload-447750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-447750
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-447750: (2.481762614s)
--- PASS: TestPreload (112.79s)
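
The preload check follows a simple recipe: create a cluster with preloads disabled at a pinned Kubernetes version, pull an extra image, stop, restart with defaults, and confirm the image survived. A sketch (profile name arbitrary, flags from the run above):

    minikube start -p preload-demo --memory=2200 --preload=false \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    # Restart without --preload=false and confirm the pulled image is kept:
    minikube start -p preload-demo --memory=2200 --wait=true \
      --driver=docker --container-runtime=containerd
    minikube -p preload-demo image list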

                                                
                                    
TestScheduledStopUnix (105.43s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-687898 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-687898 --memory=2048 --driver=docker  --container-runtime=containerd: (29.815390551s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-687898 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-687898 -n scheduled-stop-687898
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-687898 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-687898 --cancel-scheduled
E0307 19:16:02.630974  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-687898 -n scheduled-stop-687898
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-687898
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-687898 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-687898
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-687898: exit status 7 (83.235034ms)

                                                
                                                
-- stdout --
	scheduled-stop-687898
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-687898 -n scheduled-stop-687898
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-687898 -n scheduled-stop-687898: exit status 7 (82.834478ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-687898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-687898
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-687898: (4.037874379s)
--- PASS: TestScheduledStopUnix (105.43s)
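
The scheduled-stop flow above exercises arming, cancelling, and re-arming a timer; TimeToStop is exposed through the status format template, and status exits 7 once the host has actually stopped. A sketch (profile name arbitrary):

    # Arm a stop five minutes out, inspect the timer, then cancel it:
    minikube stop -p sched-demo --schedule 5m
    minikube status -p sched-demo --format={{.TimeToStop}}
    minikube stop -p sched-demo --cancel-scheduled
    # Re-arm with a short window and wait for it to fire:
    minikube stop -p sched-demo --schedule 15s
    sleep 20
    minikube status -p sched-demo || echo "stopped (exit $?)"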

                                                
                                    
TestInsufficientStorage (11.67s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-432088 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-432088 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.182421752s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ca3270c8-bc79-4268-9e1f-a029d994c108","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-432088] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6027f92e-38f6-49a4-a1ee-c57af9f07c30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18239"}}
	{"specversion":"1.0","id":"f539e0db-7c37-4b6b-b0e8-322334d2f6b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a32f0c37-8fbd-4f61-b6c2-ee9a2a89754c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig"}}
	{"specversion":"1.0","id":"c4c94052-cead-4b17-b93c-f4348c47d39d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube"}}
	{"specversion":"1.0","id":"a7ed9c59-bf5f-420f-b88d-4ca85426d9fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3afd87c8-56e9-49ad-bcd9-cbd3b8fa3ae3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d023f670-2e3b-4d16-935e-20718916b740","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"499d97de-6281-4dd8-af4e-4111d84befe0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"962ba128-0798-4a38-acc7-3c960d6f84f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0aa827d9-8481-4145-b1cd-d2ae0a085205","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a149a571-e65a-43ba-b2e2-4c29540b092b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-432088\" primary control-plane node in \"insufficient-storage-432088\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"11b25a7e-3d8d-4919-80ea-b56abfd55bd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708944392-18244 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a4b9266-7659-4c09-b9ee-694a08b761e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a580da6-33fd-4828-8b04-3d82e07ef18a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-432088 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-432088 --output=json --layout=cluster: exit status 7 (298.291935ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-432088","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-432088","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0307 19:17:15.290085  702547 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-432088" does not appear in /home/jenkins/minikube-integration/18239-558171/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-432088 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-432088 --output=json --layout=cluster: exit status 7 (301.12286ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-432088","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-432088","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0307 19:17:15.591892  702599 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-432088" does not appear in /home/jenkins/minikube-integration/18239-558171/kubeconfig
	E0307 19:17:15.602085  702599 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/insufficient-storage-432088/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-432088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-432088
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-432088: (1.886489037s)
--- PASS: TestInsufficientStorage (11.67s)
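
The storage gate here is driven by the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE overrides visible in the JSON output, which fake a nearly full /var; start then aborts with exit 26 (RSRC_DOCKER_STORAGE). A sketch, assuming the values are interpreted as in this run (100 total, 19 available):

    export MINIKUBE_TEST_STORAGE_CAPACITY=100
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19
    minikube start -p storage-demo --memory=2048 --output=json --wait=true \
      --driver=docker --container-runtime=containerd || echo "exit: $?"
    # exit 26 == RSRC_DOCKER_STORAGE; pass --force to skip the check entirely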

                                                
                                    
TestRunningBinaryUpgrade (89.85s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3862224191 start -p running-upgrade-097922 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3862224191 start -p running-upgrade-097922 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.647699694s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-097922 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-097922 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.143987588s)
helpers_test.go:175: Cleaning up "running-upgrade-097922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-097922
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-097922: (2.861722392s)
--- PASS: TestRunningBinaryUpgrade (89.85s)
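
The running-binary upgrade amounts to creating a cluster with a previously released binary and then re-running start on the same profile with the binary under test (note the old binary still spells the driver flag --vm-driver, as in the log above). A sketch with an illustrative path to the old release:

    # Create with an older released binary (path illustrative):
    /tmp/minikube-v1.26.0 start -p upgrade-live --memory=2200 \
      --vm-driver=docker --container-runtime=containerd
    # Upgrade in place: same profile, new binary:
    minikube start -p upgrade-live --memory=2200 \
      --driver=docker --container-runtime=containerd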

                                                
                                    
TestKubernetesUpgrade (383.8s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-363026 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-363026 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (56.351401204s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-363026
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-363026: (2.049930202s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-363026 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-363026 status --format={{.Host}}: exit status 7 (87.360524ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-363026 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-363026 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5m3.818911345s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-363026 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-363026 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-363026 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (154.268968ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-363026] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18239
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-363026
	    minikube start -p kubernetes-upgrade-363026 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3630262 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-363026 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-363026 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-363026 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.821420976s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-363026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-363026
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-363026: (3.317683875s)
--- PASS: TestKubernetesUpgrade (383.80s)
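
The upgrade path validated above is: bring the cluster up at an old version, stop it, and restart at the new version; an in-place downgrade is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) plus the recovery options shown in the stderr block. A sketch (profile name arbitrary):

    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 \
      --driver=docker --container-runtime=containerd
    minikube stop -p upgrade-demo
    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.29.0-rc.2 \
      --driver=docker --container-runtime=containerd
    # Downgrading in place is refused; delete and recreate instead:
    minikube start -p upgrade-demo --kubernetes-version=v1.20.0 || echo "exit: $?"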

                                                
                                    
TestMissingContainerUpgrade (171.95s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.555545161 start -p missing-upgrade-789231 --memory=2200 --driver=docker  --container-runtime=containerd
E0307 19:17:19.662846  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.555545161 start -p missing-upgrade-789231 --memory=2200 --driver=docker  --container-runtime=containerd: (1m21.968199431s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-789231
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-789231: (13.396460521s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-789231
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-789231 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-789231 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m12.95854832s)
helpers_test.go:175: Cleaning up "missing-upgrade-789231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-789231
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-789231: (2.468958987s)
--- PASS: TestMissingContainerUpgrade (171.95s)
TestPause/serial/Start (72.31s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-985336 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-985336 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m12.308378679s)
--- PASS: TestPause/serial/Start (72.31s)
TestPause/serial/SecondStartNoReconfiguration (6.93s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-985336 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-985336 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.921223105s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.93s)
TestPause/serial/Pause (1.06s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-985336 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-985336 --alsologtostderr -v=5: (1.059603826s)
--- PASS: TestPause/serial/Pause (1.06s)
TestPause/serial/VerifyStatus (0.43s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-985336 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-985336 --output=json --layout=cluster: exit status 2 (429.281328ms)
-- stdout --
	{"Name":"pause-985336","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-985336","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
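
The status JSON above encodes state numerically: 418 marks a paused component, 405 a stopped one, and 200 a healthy one, which is why the status command exits 2 while the cluster is paused and the test still passes. A minimal sketch of decoding that output follows; the struct mirrors only the fields visible in this log, not minikube's own type:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// clusterStatus covers a subset of the --output=json --layout=cluster fields.
	type clusterStatus struct {
		Name       string
		StatusCode int
		StatusName string
	}

	func main() {
		raw := `{"Name":"pause-985336","StatusCode":418,"StatusName":"Paused"}`
		var st clusterStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		fmt.Println(st.StatusName) // "Paused", the state this test expects
	}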
TestPause/serial/Unpause (0.78s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-985336 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.78s)
TestPause/serial/PauseAgain (1.17s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-985336 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-985336 --alsologtostderr -v=5: (1.172528321s)
--- PASS: TestPause/serial/PauseAgain (1.17s)
TestPause/serial/DeletePaused (2.76s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-985336 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-985336 --alsologtostderr -v=5: (2.764213931s)
--- PASS: TestPause/serial/DeletePaused (2.76s)
TestPause/serial/VerifyDeletedResources (2.97s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (2.923920613s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-985336
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-985336: exit status 1 (14.165654ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-985336: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.97s)
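
The non-zero exit from "docker volume inspect" is the assertion here: once the profile has been deleted, its volume must be gone. A minimal sketch of that check via os/exec (the profile name is taken from the log; this is not the test's actual helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Inspecting a removed volume fails with a non-zero exit code.
		err := exec.Command("docker", "volume", "inspect", "pause-985336").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("volume gone as expected; exit code:", ee.ExitCode()) // 1 in the log above
		} else if err == nil {
			fmt.Println("volume still exists (cleanup failed)")
		}
	}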
TestStoppedBinaryUpgrade/Setup (1.08s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.08s)
TestStoppedBinaryUpgrade/Upgrade (128.41s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1550722 start -p stopped-upgrade-906478 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1550722 start -p stopped-upgrade-906478 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.772600712s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1550722 -p stopped-upgrade-906478 stop
E0307 19:21:02.631932  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1550722 -p stopped-upgrade-906478 stop: (19.961276786s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-906478 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-906478 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.675281795s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (128.41s)
TestStoppedBinaryUpgrade/MinikubeLogs (1.68s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-906478
E0307 19:22:19.662666  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-906478: (1.677278737s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.68s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-129439 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-129439 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (132.259877ms)
-- stdout --
	* [NoKubernetes-129439] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18239
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)
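
This test passes because the failure is the expected outcome: --no-kubernetes and --kubernetes-version are mutually exclusive, and minikube exits with usage code 14. A hedged sketch of that style of validation (an illustration, not minikube's actual flag handling):

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start the node without Kubernetes")
		version := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()
		if *noK8s && *version != "" {
			// Mirrors the MK_USAGE rejection and exit status 14 seen above.
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
	}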
TestNoKubernetes/serial/StartWithK8s (43.39s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-129439 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-129439 --driver=docker  --container-runtime=containerd: (42.980251842s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-129439 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.39s)
TestNetworkPlugins/group/false (4.66s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-483051 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-483051 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (228.179576ms)
-- stdout --
	* [false-483051] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18239
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0307 19:25:15.192549  741709 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:25:15.192811  741709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:25:15.192847  741709 out.go:304] Setting ErrFile to fd 2...
	I0307 19:25:15.192870  741709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:25:15.193170  741709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18239-558171/.minikube/bin
	I0307 19:25:15.194667  741709 out.go:298] Setting JSON to false
	I0307 19:25:15.195828  741709 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11259,"bootTime":1709828256,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 19:25:15.195947  741709 start.go:139] virtualization:  
	I0307 19:25:15.201284  741709 out.go:177] * [false-483051] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 19:25:15.204063  741709 out.go:177]   - MINIKUBE_LOCATION=18239
	I0307 19:25:15.204124  741709 notify.go:220] Checking for updates...
	I0307 19:25:15.208846  741709 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:25:15.211345  741709 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18239-558171/kubeconfig
	I0307 19:25:15.213332  741709 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18239-558171/.minikube
	I0307 19:25:15.215487  741709 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 19:25:15.217082  741709 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:25:15.219433  741709 config.go:182] Loaded profile config "NoKubernetes-129439": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 19:25:15.219555  741709 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:25:15.249266  741709 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 19:25:15.249479  741709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 19:25:15.321129  741709 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 19:25:15.31075206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 19:25:15.321234  741709 docker.go:295] overlay module found
	I0307 19:25:15.323475  741709 out.go:177] * Using the docker driver based on user configuration
	I0307 19:25:15.325047  741709 start.go:297] selected driver: docker
	I0307 19:25:15.325065  741709 start.go:901] validating driver "docker" against <nil>
	I0307 19:25:15.325079  741709 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:25:15.327886  741709 out.go:177] 
	W0307 19:25:15.329827  741709 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0307 19:25:15.331764  741709 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-483051 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-483051

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-483051

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-483051

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-483051

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-483051

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-483051

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-483051

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-483051

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-483051

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-483051

>>> host: /etc/nsswitch.conf:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: /etc/hosts:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: /etc/resolv.conf:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-483051

>>> host: crictl pods:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: crictl containers:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> k8s: describe netcat deployment:
error: context "false-483051" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-483051" does not exist

>>> k8s: netcat logs:
error: context "false-483051" does not exist

>>> k8s: describe coredns deployment:
error: context "false-483051" does not exist

>>> k8s: describe coredns pods:
error: context "false-483051" does not exist

>>> k8s: coredns logs:
error: context "false-483051" does not exist

>>> k8s: describe api server pod(s):
error: context "false-483051" does not exist

>>> k8s: api server logs:
error: context "false-483051" does not exist

>>> host: /etc/cni:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: ip a s:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: ip r s:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: iptables-save:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: iptables table nat:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> k8s: describe kube-proxy daemon set:
error: context "false-483051" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-483051" does not exist

>>> k8s: kube-proxy logs:
error: context "false-483051" does not exist

>>> host: kubelet daemon status:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: kubelet daemon config:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> k8s: kubelet logs:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18239-558171/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 07 Mar 2024 19:25:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-129439
contexts:
- context:
    cluster: NoKubernetes-129439
    extensions:
    - extension:
        last-update: Thu, 07 Mar 2024 19:25:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-129439
  name: NoKubernetes-129439
current-context: NoKubernetes-129439
kind: Config
preferences: {}
users:
- name: NoKubernetes-129439
  user:
    client-certificate: /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/NoKubernetes-129439/client.crt
    client-key: /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/NoKubernetes-129439/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-483051

>>> host: docker daemon status:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: docker daemon config:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: /etc/docker/daemon.json:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: docker system info:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: cri-docker daemon status:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: cri-docker daemon config:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: cri-dockerd version:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: containerd daemon status:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: containerd daemon config:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: /etc/containerd/config.toml:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: containerd config dump:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: crio daemon status:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: crio daemon config:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: /etc/crio:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

>>> host: crio config:
* Profile "false-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483051"

----------------------- debugLogs end: false-483051 [took: 4.263639745s] --------------------------------
helpers_test.go:175: Cleaning up "false-483051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-483051
--- PASS: TestNetworkPlugins/group/false (4.66s)
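
As with the usage-error checks above, this pass means the start was correctly rejected: with --cni=false, a CRI runtime such as containerd has no way to wire up pod networking. A rough sketch of that kind of validation rule (an illustration, not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
	)

	// validateCNI rejects configurations where a CRI runtime is asked to run
	// without any CNI plugin, the condition this test provokes with --cni=false.
	func validateCNI(runtime, cni string) error {
		if cni == "false" && (runtime == "containerd" || runtime == "crio") {
			return fmt.Errorf("The %q container runtime requires CNI", runtime)
		}
		return nil
	}

	func main() {
		if err := validateCNI("containerd", "false"); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
			os.Exit(14)
		}
	}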
TestNoKubernetes/serial/StartWithStopK8s (17.71s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-129439 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-129439 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.479960236s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-129439 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-129439 status -o json: exit status 2 (293.820282ms)
-- stdout --
	{"Name":"NoKubernetes-129439","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-129439
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-129439: (1.932878364s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.71s)
TestNoKubernetes/serial/Start (9.03s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-129439 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-129439 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.028015112s)
--- PASS: TestNoKubernetes/serial/Start (9.03s)
TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-129439 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-129439 "sudo systemctl is-active --quiet service kubelet": exit status 1 (411.919248ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)
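
The probe leans on systemd's documented exit codes: "systemctl is-active --quiet" exits 0 only when the unit is active and 3 when it is inactive, which is the "Process exited with status 3" reported over ssh. A minimal local equivalent of the check (run directly on the node rather than through the ssh wrapper the test uses):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit code 0 means active; 3 means inactive or dead. Any non-zero
		// exit here is what lets the test conclude Kubernetes is not running.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("kubelet not running; systemctl exit code:", ee.ExitCode())
		}
	}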
TestNoKubernetes/serial/ProfileList (1.15s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.15s)
TestNoKubernetes/serial/Stop (1.25s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-129439
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-129439: (1.254357773s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)
TestNoKubernetes/serial/StartNoArgs (7.79s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-129439 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-129439 --driver=docker  --container-runtime=containerd: (7.789833602s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.79s)
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-129439 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-129439 "sudo systemctl is-active --quiet service kubelet": exit status 1 (464.841209ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)
TestStartStop/group/old-k8s-version/serial/FirstStart (174.68s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-490121 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0307 19:27:19.662732  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-490121 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m54.678071745s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (174.68s)
TestStartStop/group/no-preload/serial/FirstStart (69.94s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-028045 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-028045 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m9.935475211s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.94s)
TestStartStop/group/old-k8s-version/serial/DeployApp (9.65s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-490121 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1cb30f5a-fc1d-4f62-945b-458ff2f994df] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1cb30f5a-fc1d-4f62-945b-458ff2f994df] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003516463s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-490121 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.65s)
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-490121 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-490121 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)
TestStartStop/group/old-k8s-version/serial/Stop (12.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-490121 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-490121 --alsologtostderr -v=3: (12.07810414s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.08s)
TestStartStop/group/no-preload/serial/DeployApp (8.39s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-028045 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d43c5cb0-93af-4b0d-b8fd-08169568d10e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0307 19:30:22.710151  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
helpers_test.go:344: "busybox" [d43c5cb0-93af-4b0d-b8fd-08169568d10e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.005822682s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-028045 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.39s)
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-490121 -n old-k8s-version-490121
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-490121 -n old-k8s-version-490121: exit status 7 (79.071964ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-490121 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.42s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-028045 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-028045 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.26011506s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-028045 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.42s)
TestStartStop/group/no-preload/serial/Stop (12.26s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-028045 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-028045 --alsologtostderr -v=3: (12.26402123s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.26s)
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.35s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-028045 -n no-preload-028045
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-028045 -n no-preload-028045: exit status 7 (132.312979ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-028045 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.35s)
TestStartStop/group/no-preload/serial/SecondStart (268.81s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-028045 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0307 19:31:02.632336  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
E0307 19:32:19.662841  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-028045 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (4m28.421429104s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-028045 -n no-preload-028045
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (268.81s)
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-r2zs2" [9e38fa1c-03b8-4fde-9e8b-14dc37948a57] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003587736s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-r2zs2" [9e38fa1c-03b8-4fde-9e8b-14dc37948a57] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003480915s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-028045 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-028045 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
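
Note: the image audit above can be approximated from the shell; a sketch, assuming minikube on PATH and jq installed (the test parses the JSON in Go, and the repoTags field name reflects recent minikube releases, so treat it as an assumption):

  # List every image tag loaded in the profile's container runtime
  minikube -p no-preload-028045 image list --format=json | jq -r '.[].repoTags[]'
  # Tags outside minikube's expected set (kindest/kindnetd and
  # gcr.io/k8s-minikube/busybox above) are reported as "Found non-minikube image"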

TestStartStop/group/no-preload/serial/Pause (3.13s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-028045 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-028045 -n no-preload-028045
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-028045 -n no-preload-028045: exit status 2 (335.582358ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-028045 -n no-preload-028045
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-028045 -n no-preload-028045: exit status 2 (332.096117ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-028045 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-028045 -n no-preload-028045
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-028045 -n no-preload-028045
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.13s)
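
Note: the pause cycle above, reproduced by hand (profile name from the log; assumes minikube on PATH). While paused, the apiserver reports Paused and the kubelet reports Stopped, and both status calls exit 2, which the test treats as acceptable:

  minikube pause -p no-preload-028045 --alsologtostderr -v=1
  minikube status --format='{{.APIServer}}' -p no-preload-028045   # prints "Paused", exits 2
  minikube status --format='{{.Kubelet}}' -p no-preload-028045     # prints "Stopped", exits 2
  minikube unpause -p no-preload-028045 --alsologtostderr -v=1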

TestStartStop/group/embed-certs/serial/FirstStart (66.38s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-327564 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0307 19:36:02.631646  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-327564 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m6.37748943s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.38s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pqw67" [078c3571-7f1f-4a04-b0b4-463c0ff25aaf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004057765s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-327564 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [12ca830c-1c6c-479a-8b82-2ee1b7cd90e0] Pending
helpers_test.go:344: "busybox" [12ca830c-1c6c-479a-8b82-2ee1b7cd90e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [12ca830c-1c6c-479a-8b82-2ee1b7cd90e0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004016021s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-327564 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.35s)
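
Note: DeployApp boils down to three kubectl calls; a sketch using the context from the log. The suite polls pod state itself rather than calling kubectl wait, which is used here as an equivalent shorthand; testdata/busybox.yaml is the suite's own manifest:

  kubectl --context embed-certs-327564 create -f testdata/busybox.yaml
  kubectl --context embed-certs-327564 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
  kubectl --context embed-certs-327564 exec busybox -- /bin/sh -c "ulimit -n"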

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pqw67" [078c3571-7f1f-4a04-b0b4-463c0ff25aaf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005742625s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-490121 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-327564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-327564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.021784193s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-327564 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)
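
Note: EnableAddonWhileActive enables an addon on a running cluster with image and registry overrides; a sketch with the names taken from the log (assumes minikube on PATH):

  minikube addons enable metrics-server -p embed-certs-327564 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
    --registries=MetricsServer=fake.domain
  # Then inspect the resulting Deployment to see that the override was applied
  kubectl --context embed-certs-327564 describe deploy/metrics-server -n kube-system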

TestStartStop/group/embed-certs/serial/Stop (13.37s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-327564 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-327564 --alsologtostderr -v=3: (13.36878525s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.37s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-490121 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/old-k8s-version/serial/Pause (2.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-490121 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-490121 -n old-k8s-version-490121
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-490121 -n old-k8s-version-490121: exit status 2 (305.924955ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-490121 -n old-k8s-version-490121
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-490121 -n old-k8s-version-490121: exit status 2 (304.123578ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-490121 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-490121 -n old-k8s-version-490121
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-490121 -n old-k8s-version-490121
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.81s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-945014 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-945014 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m6.145639613s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.15s)
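
Note: default-k8s-diff-port starts the apiserver on a non-default port; a sketch with the flags from the log (assumes minikube on PATH; kubectl then reaches port 8444 through the generated kubeconfig context):

  minikube start -p default-k8s-diff-port-945014 --memory=2200 --wait=true \
    --apiserver-port=8444 --driver=docker --container-runtime=containerd \
    --kubernetes-version=v1.28.4
  kubectl --context default-k8s-diff-port-945014 cluster-info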

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-327564 -n embed-certs-327564
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-327564 -n embed-certs-327564: exit status 7 (100.322346ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-327564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/embed-certs/serial/SecondStart (294.05s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-327564 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0307 19:37:19.662160  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-327564 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m53.633487368s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-327564 -n embed-certs-327564
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (294.05s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-945014 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b2d08cb3-fc6d-4972-8be9-d608b747e813] Pending
helpers_test.go:344: "busybox" [b2d08cb3-fc6d-4972-8be9-d608b747e813] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b2d08cb3-fc6d-4972-8be9-d608b747e813] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004188654s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-945014 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-945014 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-945014 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.093143995s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-945014 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-945014 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-945014 --alsologtostderr -v=3: (12.009859903s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-945014 -n default-k8s-diff-port-945014
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-945014 -n default-k8s-diff-port-945014: exit status 7 (87.162515ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-945014 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (304.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-945014 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0307 19:40:03.638667  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:40:03.644070  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:40:03.654322  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:40:03.674502  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:40:03.714896  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:40:03.795353  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:40:03.956372  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:40:04.277330  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:40:04.917955  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:40:06.198644  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:40:08.759714  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:40:13.880247  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:40:22.290638  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
E0307 19:40:22.295951  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
E0307 19:40:22.306207  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
E0307 19:40:22.326623  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
E0307 19:40:22.366880  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
E0307 19:40:22.447228  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
E0307 19:40:22.607648  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
E0307 19:40:22.928054  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
E0307 19:40:23.568922  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
E0307 19:40:24.120393  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:40:24.849648  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
E0307 19:40:27.410600  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
E0307 19:40:32.531511  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
E0307 19:40:42.772241  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
E0307 19:40:44.600920  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:40:45.682397  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
E0307 19:41:02.630939  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
E0307 19:41:03.253398  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
E0307 19:41:25.561656  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:41:44.214351  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-945014 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m3.761558734s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-945014 -n default-k8s-diff-port-945014
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (304.14s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-p2rvx" [6c12db53-2c74-4848-968f-9ef170091355] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004022571s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-p2rvx" [6c12db53-2c74-4848-968f-9ef170091355] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004840618s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-327564 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-327564 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-327564 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-327564 -n embed-certs-327564
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-327564 -n embed-certs-327564: exit status 2 (321.826641ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-327564 -n embed-certs-327564
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-327564 -n embed-certs-327564: exit status 2 (343.41485ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-327564 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-327564 -n embed-certs-327564
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-327564 -n embed-certs-327564
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.16s)

TestStartStop/group/newest-cni/serial/FirstStart (45.75s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-485502 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0307 19:42:19.662726  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
E0307 19:42:47.482753  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-485502 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (45.748557521s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.75s)
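
Note: the newest-cni FirstStart above combines a feature gate, an explicit CNI network plugin, and a kubeadm extra-config; a sketch with the flags taken verbatim from the log (assumes minikube on PATH):

  minikube start -p newest-cni-485502 --memory=2200 \
    --feature-gates ServerSideApply=true --network-plugin=cni \
    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
    --wait=apiserver,system_pods,default_sa \
    --driver=docker --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2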

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-485502 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-485502 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.33748535s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

TestStartStop/group/newest-cni/serial/Stop (1.47s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-485502 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-485502 --alsologtostderr -v=3: (1.473862286s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.47s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-485502 -n newest-cni-485502
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-485502 -n newest-cni-485502: exit status 7 (83.563079ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-485502 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (15.77s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-485502 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0307 19:43:06.135501  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-485502 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (15.419816444s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-485502 -n newest-cni-485502
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.77s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-485502 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (3.02s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-485502 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-485502 -n newest-cni-485502
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-485502 -n newest-cni-485502: exit status 2 (346.541492ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-485502 -n newest-cni-485502
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-485502 -n newest-cni-485502: exit status 2 (348.513767ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-485502 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-485502 -n newest-cni-485502
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-485502 -n newest-cni-485502
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.02s)

TestNetworkPlugins/group/auto/Start (63.14s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-483051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-483051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m3.136936139s)
--- PASS: TestNetworkPlugins/group/auto/Start (63.14s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vpbc5" [35afece4-a2b2-417e-8122-20d96e56f843] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004435619s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vpbc5" [35afece4-a2b2-417e-8122-20d96e56f843] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004633603s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-945014 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-945014 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-945014 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-945014 -n default-k8s-diff-port-945014
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-945014 -n default-k8s-diff-port-945014: exit status 2 (464.60582ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-945014 -n default-k8s-diff-port-945014
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-945014 -n default-k8s-diff-port-945014: exit status 2 (406.232075ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-945014 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-945014 --alsologtostderr -v=1: (1.033924496s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-945014 -n default-k8s-diff-port-945014
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-945014 -n default-k8s-diff-port-945014
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.01s)
E0307 19:49:23.627893  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/default-k8s-diff-port-945014/client.crt: no such file or directory
E0307 19:49:26.671787  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/auto-483051/client.crt: no such file or directory
E0307 19:49:26.677093  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/auto-483051/client.crt: no such file or directory
E0307 19:49:26.687330  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/auto-483051/client.crt: no such file or directory
E0307 19:49:26.707587  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/auto-483051/client.crt: no such file or directory
E0307 19:49:26.747861  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/auto-483051/client.crt: no such file or directory
E0307 19:49:26.828173  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/auto-483051/client.crt: no such file or directory
E0307 19:49:26.988655  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/auto-483051/client.crt: no such file or directory
E0307 19:49:27.309140  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/auto-483051/client.crt: no such file or directory
E0307 19:49:27.949820  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/auto-483051/client.crt: no such file or directory
E0307 19:49:29.230525  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/auto-483051/client.crt: no such file or directory
E0307 19:49:31.791250  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/auto-483051/client.crt: no such file or directory
E0307 19:49:36.911847  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/auto-483051/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Start (61.57s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-483051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-483051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m1.567953226s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.57s)

TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-483051 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

TestNetworkPlugins/group/auto/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-483051 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fnhj9" [71888fff-f4a7-4037-a9de-a824e93440e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fnhj9" [71888fff-f4a7-4037-a9de-a824e93440e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004427943s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.31s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-483051 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
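
Note: the DNS, Localhost, and HairPin checks above are three probes run against the suite's netcat deployment; reproduced with the context from the log (the deployment's image provides nslookup and nc):

  kubectl --context auto-483051 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin: pod reaches itself via its service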

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gj7b6" [19a16905-d342-4e5f-852f-ae0cc39442d8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004821993s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-483051 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-483051 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-znhqr" [7435ea06-f4b6-4255-8855-214a77be84cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-znhqr" [7435ea06-f4b6-4255-8855-214a77be84cb] Running
E0307 19:45:03.638318  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006677616s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.38s)
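
Note: two recurring details appear in this entry. First, kubectl replace --force deletes and recreates the object, so each NetCatPod run starts from a freshly scheduled pod:

    kubectl --context kindnet-483051 replace --force -f testdata/netcat-deployment.yaml

Second, the interleaved "E0307 ... cert_rotation.go:168 ... client.crt: no such file or directory" lines appear to be client-go background watchers trying to reload client certificates for profiles (old-k8s-version-490121 and others) that earlier tests already deleted; they are harmless noise and recur throughout the rest of this report.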

                                                
                                    
TestNetworkPlugins/group/calico/Start (80.83s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-483051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-483051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m20.827917508s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.83s)
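
Note: every Start test in this group is the same invocation with a different --cni value, which accepts either a built-in plugin name (calico, flannel, bridge, kindnet) or a manifest path, as the custom-flannel group below shows with --cni=testdata/kube-flannel.yaml:

    out/minikube-linux-arm64 start -p calico-483051 \
      --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m \
      --cni=calico --driver=docker --container-runtime=containerd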

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-483051 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (67.49s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-483051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0307 19:45:31.323288  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/old-k8s-version-490121/client.crt: no such file or directory
E0307 19:45:49.976068  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/no-preload-028045/client.crt: no such file or directory
E0307 19:46:02.631879  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/addons-678595/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-483051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m7.485039103s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.49s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jch2l" [e03e0919-ac24-40d7-9616-158f7a57e7a4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00626503s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-483051 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-483051 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4bdbx" [7e170098-1d43-4e9c-9d99-3956c949e6d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4bdbx" [7e170098-1d43-4e9c-9d99-3956c949e6d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004029559s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-483051 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-483051 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-483051 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5s444" [880a84d7-95f2-4ea2-a9c1-de935535ad59] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5s444" [880a84d7-95f2-4ea2-a9c1-de935535ad59] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.006074976s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-483051 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (89.86s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-483051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0307 19:47:02.710997  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-483051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m29.863228446s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.86s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (60.5s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-483051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0307 19:47:19.662429  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/functional-788559/client.crt: no such file or directory
E0307 19:48:01.703194  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/default-k8s-diff-port-945014/client.crt: no such file or directory
E0307 19:48:01.709032  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/default-k8s-diff-port-945014/client.crt: no such file or directory
E0307 19:48:01.719275  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/default-k8s-diff-port-945014/client.crt: no such file or directory
E0307 19:48:01.739505  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/default-k8s-diff-port-945014/client.crt: no such file or directory
E0307 19:48:01.779734  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/default-k8s-diff-port-945014/client.crt: no such file or directory
E0307 19:48:01.860290  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/default-k8s-diff-port-945014/client.crt: no such file or directory
E0307 19:48:02.020623  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/default-k8s-diff-port-945014/client.crt: no such file or directory
E0307 19:48:02.341024  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/default-k8s-diff-port-945014/client.crt: no such file or directory
E0307 19:48:02.981708  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/default-k8s-diff-port-945014/client.crt: no such file or directory
E0307 19:48:04.262721  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/default-k8s-diff-port-945014/client.crt: no such file or directory
E0307 19:48:06.823523  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/default-k8s-diff-port-945014/client.crt: no such file or directory
E0307 19:48:11.944594  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/default-k8s-diff-port-945014/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-483051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m0.496558388s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.50s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-n6nn2" [0c72f6a2-73ed-49f2-abfa-1fab3b330c99] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00663123s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-483051 "pgrep -a kubelet"
E0307 19:48:22.185687  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/default-k8s-diff-port-945014/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-483051 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xwfm8" [6bc075c8-9500-410d-8621-4c503af5e879] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xwfm8" [6bc075c8-9500-410d-8621-4c503af5e879] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004869765s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-483051 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-483051 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v7g69" [3568a7b5-4660-450c-9186-851ade6c8543] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v7g69" [3568a7b5-4660-450c-9186-851ade6c8543] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003438054s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-483051 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-483051 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (45.84s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-483051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-483051 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (45.836673149s)
--- PASS: TestNetworkPlugins/group/bridge/Start (45.84s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-483051 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-483051 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4tb2x" [e42d44fc-acf9-45c2-8992-12d969f7eb6d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4tb2x" [e42d44fc-acf9-45c2-8992-12d969f7eb6d] Running
E0307 19:49:47.152713  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/auto-483051/client.crt: no such file or directory
E0307 19:49:47.635551  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/kindnet-483051/client.crt: no such file or directory
E0307 19:49:47.640815  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/kindnet-483051/client.crt: no such file or directory
E0307 19:49:47.651043  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/kindnet-483051/client.crt: no such file or directory
E0307 19:49:47.671321  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/kindnet-483051/client.crt: no such file or directory
E0307 19:49:47.711585  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/kindnet-483051/client.crt: no such file or directory
E0307 19:49:47.791890  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/kindnet-483051/client.crt: no such file or directory
E0307 19:49:47.952328  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/kindnet-483051/client.crt: no such file or directory
E0307 19:49:48.272827  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/kindnet-483051/client.crt: no such file or directory
E0307 19:49:48.913308  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/kindnet-483051/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004170207s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-483051 exec deployment/netcat -- nslookup kubernetes.default
E0307 19:49:50.194143  563581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/kindnet-483051/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-483051 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (31/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.77s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-197042 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-197042" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-197042
--- SKIP: TestDownloadOnlyKic (0.77s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
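
Note: the skip message implies the integration binary gates this test behind a --gvisor flag defaulting to false. Opting in would look roughly like the following sketch; the package path is an assumption, not taken from this report:

    # hypothetical invocation; -gvisor is inferred from the skip message above
    go test ./test/integration -run TestGvisorAddon -gvisor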

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-314792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-314792
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.23s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
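
Note: the debugLogs dump below was collected even though the test was skipped before any cluster was created, so every probe fails with a missing-context or missing-profile error. Nothing in it indicates a real failure; note the [pass: true] marker on the header line.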
panic.go:626: 
----------------------- debugLogs start: kubenet-483051 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-483051

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-483051

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-483051

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-483051

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-483051

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-483051

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-483051

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-483051

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-483051

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-483051

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: /etc/hosts:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: /etc/resolv.conf:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-483051

>>> host: crictl pods:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: crictl containers:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> k8s: describe netcat deployment:
error: context "kubenet-483051" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-483051" does not exist

>>> k8s: netcat logs:
error: context "kubenet-483051" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-483051" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-483051" does not exist

>>> k8s: coredns logs:
error: context "kubenet-483051" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-483051" does not exist

>>> k8s: api server logs:
error: context "kubenet-483051" does not exist

>>> host: /etc/cni:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: ip a s:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: ip r s:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: iptables-save:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: iptables table nat:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-483051" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-483051" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-483051" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: kubelet daemon config:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> k8s: kubelet logs:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-483051

>>> host: docker daemon status:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: docker daemon config:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: docker system info:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: cri-docker daemon status:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: cri-docker daemon config:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: cri-dockerd version:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: containerd daemon status:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: containerd daemon config:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

>>> host: containerd config dump:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483051"

                                                
                                                
----------------------- debugLogs end: kubenet-483051 [took: 4.983163667s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-483051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-483051
--- SKIP: TestNetworkPlugins/group/kubenet (5.23s)

TestNetworkPlugins/group/cilium (4.13s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-483051 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-483051

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-483051

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-483051

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-483051

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-483051

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-483051

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-483051

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-483051

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-483051

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-483051

>>> host: /etc/nsswitch.conf:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: /etc/hosts:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: /etc/resolv.conf:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-483051

>>> host: crictl pods:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: crictl containers:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> k8s: describe netcat deployment:
error: context "cilium-483051" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-483051" does not exist

>>> k8s: netcat logs:
error: context "cilium-483051" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-483051" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-483051" does not exist

>>> k8s: coredns logs:
error: context "cilium-483051" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-483051" does not exist

>>> k8s: api server logs:
error: context "cilium-483051" does not exist

>>> host: /etc/cni:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: ip a s:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: ip r s:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: iptables-save:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: iptables table nat:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-483051

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-483051

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-483051" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-483051" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-483051

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-483051

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-483051" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-483051" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-483051" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-483051" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-483051" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: kubelet daemon config:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> k8s: kubelet logs:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18239-558171/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 07 Mar 2024 19:25:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-129439
contexts:
- context:
    cluster: NoKubernetes-129439
    extensions:
    - extension:
        last-update: Thu, 07 Mar 2024 19:25:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-129439
  name: NoKubernetes-129439
current-context: NoKubernetes-129439
kind: Config
preferences: {}
users:
- name: NoKubernetes-129439
  user:
    client-certificate: /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/NoKubernetes-129439/client.crt
    client-key: /home/jenkins/minikube-integration/18239-558171/.minikube/profiles/NoKubernetes-129439/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-483051

>>> host: docker daemon status:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: docker daemon config:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: docker system info:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: cri-docker daemon status:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: cri-docker daemon config:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: cri-dockerd version:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: containerd daemon status:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: containerd daemon config:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: containerd config dump:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: crio daemon status:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: crio daemon config:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: /etc/crio:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

>>> host: crio config:
* Profile "cilium-483051" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483051"

----------------------- debugLogs end: cilium-483051 [took: 3.972273509s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-483051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-483051
--- SKIP: TestNetworkPlugins/group/cilium (4.13s)