Test Report: Docker_Linux_crio_arm64 18169

248a87e642b5c2a9040ef2ce1129e71918aa65a4:2024-02-14:33129

Failed tests (3/314)

|-------|------------------------------------------------------|--------------|
| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 39    | TestAddons/parallel/Ingress                          | 168.61       |
| 171   | TestIngressAddonLegacy/serial/ValidateIngressAddons  | 181.2        |
| 269   | TestPause/serial/SecondStartNoReconfiguration        | 61           |
|-------|------------------------------------------------------|--------------|
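Each of the three failures can be re-run in isolation with go test's subtest selector. The sketch below is an assumption about the usual minikube repository layout (integration tests under ./test/integration) and omits the job-specific flags (docker driver, crio runtime, path to the built minikube binary) that this run also passed; it is not a command taken from this report.

    # Hypothetical reproduction sketch: re-run a single failed subtest.
    # Extra minikube-specific test flags for the docker/crio configuration
    # used by this job would still need to be supplied.
    go test ./test/integration -v -timeout 90m -run "TestAddons/parallel/Ingress"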
TestAddons/parallel/Ingress (168.61s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-956081 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-956081 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-956081 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e3dd01a3-80ef-431b-bca5-4314a57d7fc5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e3dd01a3-80ef-431b-bca5-4314a57d7fc5] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003276494s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-956081 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-956081 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.498101516s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-956081 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-956081 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.064339297s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-956081 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-956081 addons disable ingress-dns --alsologtostderr -v=1: (1.503658722s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-956081 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-956081 addons disable ingress --alsologtostderr -v=1: (7.724417385s)
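The two probes that failed above can be repeated by hand against the same profile; both commands are taken verbatim from the run log (this assumes the addons-956081 profile still exists and that ingress/ingress-dns have not yet been disabled by the cleanup steps immediately above). Exit status 28 from the remote command is curl's operation-timed-out code.

    # Re-issue the in-cluster curl that timed out
    out/minikube-linux-arm64 -p addons-956081 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Re-issue the ingress-dns lookup against the node IP reported by "minikube ip"
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-956081 ip)"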
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-956081
helpers_test.go:235: (dbg) docker inspect addons-956081:

	-- stdout --
	[
	    {
	        "Id": "3d3aad31f159bc205228e7ad4b5d677873b031939a0e0b8e43f888db9e6b8036",
	        "Created": "2024-02-14T00:19:15.757148528Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 505334,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T00:19:16.043302982Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/3d3aad31f159bc205228e7ad4b5d677873b031939a0e0b8e43f888db9e6b8036/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3d3aad31f159bc205228e7ad4b5d677873b031939a0e0b8e43f888db9e6b8036/hostname",
	        "HostsPath": "/var/lib/docker/containers/3d3aad31f159bc205228e7ad4b5d677873b031939a0e0b8e43f888db9e6b8036/hosts",
	        "LogPath": "/var/lib/docker/containers/3d3aad31f159bc205228e7ad4b5d677873b031939a0e0b8e43f888db9e6b8036/3d3aad31f159bc205228e7ad4b5d677873b031939a0e0b8e43f888db9e6b8036-json.log",
	        "Name": "/addons-956081",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-956081:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-956081",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dba98b604172098a2f83ce5812ac163a7ff096bf37f4e7ecfee4e5a59a2a9066-init/diff:/var/lib/docker/overlay2/6bce6236d7ba68734b2ab000b848b0bb40e1e541964b0b25c50d016c8f0ef97c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dba98b604172098a2f83ce5812ac163a7ff096bf37f4e7ecfee4e5a59a2a9066/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dba98b604172098a2f83ce5812ac163a7ff096bf37f4e7ecfee4e5a59a2a9066/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dba98b604172098a2f83ce5812ac163a7ff096bf37f4e7ecfee4e5a59a2a9066/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-956081",
	                "Source": "/var/lib/docker/volumes/addons-956081/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-956081",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-956081",
	                "name.minikube.sigs.k8s.io": "addons-956081",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b1ae75a4a54edcbfcbdccaf7ad65864b652857d01f8222e8baefada5e9bcd68",
	            "SandboxKey": "/var/run/docker/netns/9b1ae75a4a54",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-956081": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3d3aad31f159",
	                        "addons-956081"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "4a26973926e9a29d9902d94f36c2a42dccd30e28112be66a48bdc54485bb83fd",
	                    "EndpointID": "189fb159178547907c1e72b58d75437e0bfac4b2653640a87be18f50e4e81dc7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-956081",
	                        "3d3aad31f159"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
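The host-side port bindings in the inspect output above (22, 2376, 5000, 8443 and 32443, all bound to 127.0.0.1) can be read back individually with an inspect format template; the harness uses the same template later in this log when it opens its SSH connection. A minimal sketch:

    # Print the host port mapped to the container's 22/tcp (33392 for this run)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-956081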
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-956081 -n addons-956081
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-956081 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-956081 logs -n 25: (1.502786494s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-842602                                                                     | download-only-842602   | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC | 14 Feb 24 00:18 UTC |
	| delete  | -p download-only-857203                                                                     | download-only-857203   | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC | 14 Feb 24 00:18 UTC |
	| delete  | -p download-only-594877                                                                     | download-only-594877   | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC | 14 Feb 24 00:18 UTC |
	| start   | --download-only -p                                                                          | download-docker-289549 | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC |                     |
	|         | download-docker-289549                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-289549                                                                   | download-docker-289549 | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC | 14 Feb 24 00:18 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-919425   | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC |                     |
	|         | binary-mirror-919425                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44395                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-919425                                                                     | binary-mirror-919425   | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC | 14 Feb 24 00:18 UTC |
	| addons  | enable dashboard -p                                                                         | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC |                     |
	|         | addons-956081                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC |                     |
	|         | addons-956081                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-956081 --wait=true                                                                | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC | 14 Feb 24 00:21 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:21 UTC | 14 Feb 24 00:21 UTC |
	|         | -p addons-956081                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-956081 ip                                                                            | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:21 UTC | 14 Feb 24 00:21 UTC |
	| addons  | addons-956081 addons disable                                                                | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:21 UTC | 14 Feb 24 00:21 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:21 UTC | 14 Feb 24 00:21 UTC |
	|         | -p addons-956081                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-956081 ssh cat                                                                       | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:21 UTC | 14 Feb 24 00:21 UTC |
	|         | /opt/local-path-provisioner/pvc-212cb541-02a7-4781-88d4-17a5a71edc4b_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-956081 addons disable                                                                | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:21 UTC | 14 Feb 24 00:22 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:21 UTC | 14 Feb 24 00:21 UTC |
	|         | addons-956081                                                                               |                        |         |         |                     |                     |
	| addons  | addons-956081 addons                                                                        | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:22 UTC | 14 Feb 24 00:22 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:22 UTC | 14 Feb 24 00:22 UTC |
	|         | addons-956081                                                                               |                        |         |         |                     |                     |
	| addons  | addons-956081 addons                                                                        | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:22 UTC | 14 Feb 24 00:22 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-956081 addons                                                                        | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:22 UTC | 14 Feb 24 00:22 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-956081 ssh curl -s                                                                   | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:23 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-956081 ip                                                                            | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:25 UTC | 14 Feb 24 00:25 UTC |
	| addons  | addons-956081 addons disable                                                                | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:25 UTC | 14 Feb 24 00:25 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-956081 addons disable                                                                | addons-956081          | jenkins | v1.32.0 | 14 Feb 24 00:25 UTC | 14 Feb 24 00:25 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 00:18:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 00:18:52.533580  504876 out.go:291] Setting OutFile to fd 1 ...
	I0214 00:18:52.533804  504876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:18:52.533830  504876 out.go:304] Setting ErrFile to fd 2...
	I0214 00:18:52.533853  504876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:18:52.534111  504876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
	I0214 00:18:52.534572  504876 out.go:298] Setting JSON to false
	I0214 00:18:52.535524  504876 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10876,"bootTime":1707859057,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0214 00:18:52.535621  504876 start.go:138] virtualization:  
	I0214 00:18:52.538257  504876 out.go:177] * [addons-956081] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 00:18:52.541299  504876 out.go:177]   - MINIKUBE_LOCATION=18169
	I0214 00:18:52.543136  504876 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 00:18:52.541357  504876 notify.go:220] Checking for updates...
	I0214 00:18:52.547728  504876 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 00:18:52.549596  504876 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	I0214 00:18:52.551535  504876 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 00:18:52.553881  504876 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 00:18:52.556144  504876 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 00:18:52.576083  504876 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 00:18:52.576189  504876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 00:18:52.646183  504876 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 00:18:52.637495033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 00:18:52.646289  504876 docker.go:295] overlay module found
	I0214 00:18:52.648997  504876 out.go:177] * Using the docker driver based on user configuration
	I0214 00:18:52.650744  504876 start.go:298] selected driver: docker
	I0214 00:18:52.650764  504876 start.go:902] validating driver "docker" against <nil>
	I0214 00:18:52.650778  504876 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 00:18:52.651452  504876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 00:18:52.712466  504876 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 00:18:52.703122772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 00:18:52.712624  504876 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 00:18:52.712847  504876 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 00:18:52.715032  504876 out.go:177] * Using Docker driver with root privileges
	I0214 00:18:52.717269  504876 cni.go:84] Creating CNI manager for ""
	I0214 00:18:52.717291  504876 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 00:18:52.717302  504876 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 00:18:52.717319  504876 start_flags.go:321] config:
	{Name:addons-956081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-956081 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 00:18:52.720837  504876 out.go:177] * Starting control plane node addons-956081 in cluster addons-956081
	I0214 00:18:52.722751  504876 cache.go:121] Beginning downloading kic base image for docker with crio
	I0214 00:18:52.724792  504876 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0214 00:18:52.726536  504876 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0214 00:18:52.726591  504876 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0214 00:18:52.726604  504876 cache.go:56] Caching tarball of preloaded images
	I0214 00:18:52.726625  504876 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 00:18:52.726681  504876 preload.go:174] Found /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0214 00:18:52.726691  504876 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0214 00:18:52.727044  504876 profile.go:148] Saving config to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/config.json ...
	I0214 00:18:52.727108  504876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/config.json: {Name:mk7c9100d86acf7ce32712960b0afce51fb3379d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:18:52.744007  504876 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 00:18:52.744125  504876 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0214 00:18:52.744148  504876 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0214 00:18:52.744156  504876 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0214 00:18:52.744164  504876 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0214 00:18:52.744177  504876 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from local cache
	I0214 00:19:08.508038  504876 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from cached tarball
	I0214 00:19:08.508085  504876 cache.go:194] Successfully downloaded all kic artifacts
	I0214 00:19:08.508121  504876 start.go:365] acquiring machines lock for addons-956081: {Name:mk895e3040ea25952307290e8a0f1f00d0f4892f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 00:19:08.508245  504876 start.go:369] acquired machines lock for "addons-956081" in 100.635µs
	I0214 00:19:08.508281  504876 start.go:93] Provisioning new machine with config: &{Name:addons-956081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-956081 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 00:19:08.508406  504876 start.go:125] createHost starting for "" (driver="docker")
	I0214 00:19:08.510861  504876 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0214 00:19:08.511127  504876 start.go:159] libmachine.API.Create for "addons-956081" (driver="docker")
	I0214 00:19:08.511169  504876 client.go:168] LocalClient.Create starting
	I0214 00:19:08.511283  504876 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem
	I0214 00:19:08.985022  504876 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/cert.pem
	I0214 00:19:09.453664  504876 cli_runner.go:164] Run: docker network inspect addons-956081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0214 00:19:09.470134  504876 cli_runner.go:211] docker network inspect addons-956081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0214 00:19:09.470229  504876 network_create.go:281] running [docker network inspect addons-956081] to gather additional debugging logs...
	I0214 00:19:09.470254  504876 cli_runner.go:164] Run: docker network inspect addons-956081
	W0214 00:19:09.485585  504876 cli_runner.go:211] docker network inspect addons-956081 returned with exit code 1
	I0214 00:19:09.485621  504876 network_create.go:284] error running [docker network inspect addons-956081]: docker network inspect addons-956081: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-956081 not found
	I0214 00:19:09.485636  504876 network_create.go:286] output of [docker network inspect addons-956081]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-956081 not found
	
	** /stderr **
	I0214 00:19:09.485751  504876 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 00:19:09.500826  504876 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025787f0}
	I0214 00:19:09.500868  504876 network_create.go:124] attempt to create docker network addons-956081 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0214 00:19:09.500928  504876 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-956081 addons-956081
	I0214 00:19:09.565405  504876 network_create.go:108] docker network addons-956081 192.168.49.0/24 created
	I0214 00:19:09.565443  504876 kic.go:121] calculated static IP "192.168.49.2" for the "addons-956081" container
	I0214 00:19:09.565538  504876 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0214 00:19:09.580508  504876 cli_runner.go:164] Run: docker volume create addons-956081 --label name.minikube.sigs.k8s.io=addons-956081 --label created_by.minikube.sigs.k8s.io=true
	I0214 00:19:09.596445  504876 oci.go:103] Successfully created a docker volume addons-956081
	I0214 00:19:09.596544  504876 cli_runner.go:164] Run: docker run --rm --name addons-956081-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-956081 --entrypoint /usr/bin/test -v addons-956081:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0214 00:19:11.483825  504876 cli_runner.go:217] Completed: docker run --rm --name addons-956081-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-956081 --entrypoint /usr/bin/test -v addons-956081:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.887237619s)
	I0214 00:19:11.483864  504876 oci.go:107] Successfully prepared a docker volume addons-956081
	I0214 00:19:11.483884  504876 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0214 00:19:11.483903  504876 kic.go:194] Starting extracting preloaded images to volume ...
	I0214 00:19:11.483979  504876 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-956081:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0214 00:19:15.687925  504876 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-956081:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.203896778s)
	I0214 00:19:15.687969  504876 kic.go:203] duration metric: took 4.204051 seconds to extract preloaded images to volume
	W0214 00:19:15.688115  504876 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0214 00:19:15.688231  504876 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0214 00:19:15.743906  504876 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-956081 --name addons-956081 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-956081 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-956081 --network addons-956081 --ip 192.168.49.2 --volume addons-956081:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0214 00:19:16.051208  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Running}}
	I0214 00:19:16.076180  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:16.101000  504876 cli_runner.go:164] Run: docker exec addons-956081 stat /var/lib/dpkg/alternatives/iptables
	I0214 00:19:16.159480  504876 oci.go:144] the created container "addons-956081" has a running status.
	I0214 00:19:16.159506  504876 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa...
	I0214 00:19:16.701251  504876 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0214 00:19:16.722078  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:16.742880  504876 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0214 00:19:16.742900  504876 kic_runner.go:114] Args: [docker exec --privileged addons-956081 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0214 00:19:16.803321  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:16.827210  504876 machine.go:88] provisioning docker machine ...
	I0214 00:19:16.827252  504876 ubuntu.go:169] provisioning hostname "addons-956081"
	I0214 00:19:16.827322  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:16.851983  504876 main.go:141] libmachine: Using SSH client type: native
	I0214 00:19:16.852411  504876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33392 <nil> <nil>}
	I0214 00:19:16.852424  504876 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-956081 && echo "addons-956081" | sudo tee /etc/hostname
	I0214 00:19:17.024897  504876 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-956081
	
	I0214 00:19:17.025035  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:17.051592  504876 main.go:141] libmachine: Using SSH client type: native
	I0214 00:19:17.051998  504876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33392 <nil> <nil>}
	I0214 00:19:17.052017  504876 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-956081' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-956081/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-956081' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 00:19:17.186157  504876 main.go:141] libmachine: SSH cmd err, output: <nil>: 
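A hedged sketch of what the native SSH provisioning step above amounts to, written against the golang.org/x/crypto/ssh package (an extra module) rather than minikube's libmachine client. The port 33392, the docker user, the key path, and the hostname command are the values shown in the log; everything else is an assumption for illustration.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and port come from the log lines above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local node, not for production
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33392", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same command the provisioner runs to set the node hostname.
	out, err := sess.CombinedOutput(`sudo hostname addons-956081 && echo "addons-956081" | sudo tee /etc/hostname`)
	fmt.Printf("output: %q err: %v\n", out, err)
}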
	I0214 00:19:17.186187  504876 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18169-498689/.minikube CaCertPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18169-498689/.minikube}
	I0214 00:19:17.186213  504876 ubuntu.go:177] setting up certificates
	I0214 00:19:17.186231  504876 provision.go:83] configureAuth start
	I0214 00:19:17.186291  504876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-956081
	I0214 00:19:17.205426  504876 provision.go:138] copyHostCerts
	I0214 00:19:17.205508  504876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18169-498689/.minikube/ca.pem (1078 bytes)
	I0214 00:19:17.205651  504876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18169-498689/.minikube/cert.pem (1123 bytes)
	I0214 00:19:17.205745  504876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18169-498689/.minikube/key.pem (1675 bytes)
	I0214 00:19:17.205827  504876 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18169-498689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca-key.pem org=jenkins.addons-956081 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-956081]
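The server certificate above is issued by the minikube CA with the listed SAN set. As a rough standard-library illustration only (an in-memory throwaway CA stands in for the persisted .minikube CA), issuing a certificate with those SANs looks roughly like this:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA; minikube instead signs with the CA stored under .minikube/certs.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the SAN list from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-956081"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "addons-956081"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}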
	I0214 00:19:17.572865  504876 provision.go:172] copyRemoteCerts
	I0214 00:19:17.572931  504876 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 00:19:17.572986  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:17.588495  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:17.686591  504876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0214 00:19:17.710888  504876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0214 00:19:17.735926  504876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0214 00:19:17.759670  504876 provision.go:86] duration metric: configureAuth took 573.425569ms
	I0214 00:19:17.759698  504876 ubuntu.go:193] setting minikube options for container-runtime
	I0214 00:19:17.759877  504876 config.go:182] Loaded profile config "addons-956081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0214 00:19:17.759987  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:17.775830  504876 main.go:141] libmachine: Using SSH client type: native
	I0214 00:19:17.776259  504876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33392 <nil> <nil>}
	I0214 00:19:17.776282  504876 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 00:19:18.011110  504876 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 00:19:18.011149  504876 machine.go:91] provisioned docker machine in 1.183901481s
	I0214 00:19:18.011162  504876 client.go:171] LocalClient.Create took 9.499986448s
	I0214 00:19:18.011180  504876 start.go:167] duration metric: libmachine.API.Create for "addons-956081" took 9.500054762s
	I0214 00:19:18.011193  504876 start.go:300] post-start starting for "addons-956081" (driver="docker")
	I0214 00:19:18.011205  504876 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 00:19:18.011281  504876 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 00:19:18.011324  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:18.030784  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:18.127790  504876 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 00:19:18.131125  504876 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0214 00:19:18.131199  504876 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0214 00:19:18.131213  504876 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0214 00:19:18.131222  504876 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0214 00:19:18.131233  504876 filesync.go:126] Scanning /home/jenkins/minikube-integration/18169-498689/.minikube/addons for local assets ...
	I0214 00:19:18.131306  504876 filesync.go:126] Scanning /home/jenkins/minikube-integration/18169-498689/.minikube/files for local assets ...
	I0214 00:19:18.131335  504876 start.go:303] post-start completed in 120.136562ms
	I0214 00:19:18.131669  504876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-956081
	I0214 00:19:18.148091  504876 profile.go:148] Saving config to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/config.json ...
	I0214 00:19:18.148392  504876 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 00:19:18.148446  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:18.172189  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:18.262203  504876 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0214 00:19:18.266476  504876 start.go:128] duration metric: createHost completed in 9.758033695s
	I0214 00:19:18.266549  504876 start.go:83] releasing machines lock for "addons-956081", held for 9.758288166s
	I0214 00:19:18.266646  504876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-956081
	I0214 00:19:18.281858  504876 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 00:19:18.281944  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:18.282157  504876 ssh_runner.go:195] Run: cat /version.json
	I0214 00:19:18.282192  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:18.301545  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:18.307684  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:18.524959  504876 ssh_runner.go:195] Run: systemctl --version
	I0214 00:19:18.529381  504876 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 00:19:18.670868  504876 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0214 00:19:18.675005  504876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 00:19:18.694575  504876 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0214 00:19:18.694653  504876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 00:19:18.725347  504876 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0214 00:19:18.725370  504876 start.go:475] detecting cgroup driver to use...
	I0214 00:19:18.725403  504876 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0214 00:19:18.725464  504876 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 00:19:18.741052  504876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 00:19:18.753922  504876 docker.go:217] disabling cri-docker service (if available) ...
	I0214 00:19:18.753992  504876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 00:19:18.767836  504876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 00:19:18.782339  504876 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 00:19:18.867838  504876 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 00:19:18.959631  504876 docker.go:233] disabling docker service ...
	I0214 00:19:18.959724  504876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 00:19:18.979115  504876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 00:19:18.991089  504876 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 00:19:19.072027  504876 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 00:19:19.173010  504876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 00:19:19.184991  504876 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 00:19:19.201442  504876 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0214 00:19:19.201509  504876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 00:19:19.211979  504876 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 00:19:19.212051  504876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 00:19:19.222478  504876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 00:19:19.232105  504876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 00:19:19.243936  504876 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 00:19:19.254964  504876 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 00:19:19.263774  504876 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 00:19:19.273067  504876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 00:19:19.351610  504876 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 00:19:19.451305  504876 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 00:19:19.451446  504876 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 00:19:19.455722  504876 start.go:543] Will wait 60s for crictl version
	I0214 00:19:19.455865  504876 ssh_runner.go:195] Run: which crictl
	I0214 00:19:19.459367  504876 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 00:19:19.501872  504876 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0214 00:19:19.501966  504876 ssh_runner.go:195] Run: crio --version
	I0214 00:19:19.541911  504876 ssh_runner.go:195] Run: crio --version
	I0214 00:19:19.586987  504876 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0214 00:19:19.589351  504876 cli_runner.go:164] Run: docker network inspect addons-956081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 00:19:19.604294  504876 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0214 00:19:19.607961  504876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
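The bash one-liner above rewrites /etc/hosts: drop any stale host.minikube.internal line, append the new mapping, and copy the temp file back. A standard-library Go equivalent, purely for illustration (writing /etc/hosts needs root):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := make([]string, 0, len(lines)+1)
	for _, line := range lines {
		// Mirrors `grep -v $'\thost.minikube.internal$'`.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// Mirrors the final `sudo cp /tmp/h.$$ /etc/hosts`.
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}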
	I0214 00:19:19.618993  504876 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0214 00:19:19.619072  504876 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 00:19:19.686149  504876 crio.go:496] all images are preloaded for cri-o runtime.
	I0214 00:19:19.686170  504876 crio.go:415] Images already preloaded, skipping extraction
	I0214 00:19:19.686225  504876 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 00:19:19.722413  504876 crio.go:496] all images are preloaded for cri-o runtime.
	I0214 00:19:19.722437  504876 cache_images.go:84] Images are preloaded, skipping loading
	I0214 00:19:19.722517  504876 ssh_runner.go:195] Run: crio config
	I0214 00:19:19.794982  504876 cni.go:84] Creating CNI manager for ""
	I0214 00:19:19.795008  504876 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 00:19:19.795048  504876 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0214 00:19:19.795075  504876 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-956081 NodeName:addons-956081 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 00:19:19.795266  504876 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-956081"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
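The generated kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Below is a small hypothetical checker, not part of minikube, that walks those documents with gopkg.in/yaml.v3 (an extra module) and prints each apiVersion/kind plus the kubernetesVersion field where present.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path as shown later in the log when the file is copied into place.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%v/%v", doc["apiVersion"], doc["kind"])
		if v, ok := doc["kubernetesVersion"]; ok {
			fmt.Printf("  kubernetesVersion=%v", v)
		}
		fmt.Println()
	}
}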
	
	I0214 00:19:19.795336  504876 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-956081 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-956081 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0214 00:19:19.795410  504876 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0214 00:19:19.804167  504876 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 00:19:19.804263  504876 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 00:19:19.812754  504876 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0214 00:19:19.830728  504876 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 00:19:19.847787  504876 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0214 00:19:19.865149  504876 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0214 00:19:19.868331  504876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 00:19:19.878721  504876 certs.go:56] Setting up /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081 for IP: 192.168.49.2
	I0214 00:19:19.878755  504876 certs.go:190] acquiring lock for shared ca certs: {Name:mk24bda5a01a6d67ca318fbbda66875cef4a1a9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:19:19.878910  504876 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18169-498689/.minikube/ca.key
	I0214 00:19:20.530167  504876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt ...
	I0214 00:19:20.530198  504876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt: {Name:mk659bb47b8c0bdf39929661f0c5e302abe77cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:19:20.530408  504876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-498689/.minikube/ca.key ...
	I0214 00:19:20.530423  504876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/ca.key: {Name:mk6fbec12cae299933078fc78d715b2d1a3a1271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:19:20.531177  504876 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.key
	I0214 00:19:20.930630  504876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.crt ...
	I0214 00:19:20.930660  504876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.crt: {Name:mkae82d462037576485fa5609961ea75c0b7169b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:19:20.930850  504876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.key ...
	I0214 00:19:20.930862  504876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.key: {Name:mk138526a9fdf5c36908dafc7b6edc56c62a4e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:19:20.931487  504876 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.key
	I0214 00:19:20.931506  504876 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt with IP's: []
	I0214 00:19:21.588331  504876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt ...
	I0214 00:19:21.588365  504876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: {Name:mk6f668b505a1c3244b4acea1dbbfb35dd8182d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:19:21.589087  504876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.key ...
	I0214 00:19:21.589103  504876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.key: {Name:mk0e3eb7b641677550cf930f30f7fbcca1d68018 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:19:21.589605  504876 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/apiserver.key.dd3b5fb2
	I0214 00:19:21.589630  504876 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0214 00:19:21.834518  504876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/apiserver.crt.dd3b5fb2 ...
	I0214 00:19:21.834548  504876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/apiserver.crt.dd3b5fb2: {Name:mkd110ce93c5109dd8c31f7f406f14ac2ec85c61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:19:21.834740  504876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/apiserver.key.dd3b5fb2 ...
	I0214 00:19:21.834758  504876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/apiserver.key.dd3b5fb2: {Name:mke8e61ccf85228e95123c83a382e0917ebfa5db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:19:21.834842  504876 certs.go:337] copying /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/apiserver.crt
	I0214 00:19:21.834918  504876 certs.go:341] copying /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/apiserver.key
	I0214 00:19:21.834977  504876 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/proxy-client.key
	I0214 00:19:21.834997  504876 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/proxy-client.crt with IP's: []
	I0214 00:19:22.073966  504876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/proxy-client.crt ...
	I0214 00:19:22.074045  504876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/proxy-client.crt: {Name:mkdc8948c21cf46ca0d262a3a95ea86bebd51298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:19:22.074239  504876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/proxy-client.key ...
	I0214 00:19:22.074254  504876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/proxy-client.key: {Name:mk523afdda9e2a7d7c8f4b1eb7b224b7183842d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:19:22.074456  504876 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca-key.pem (1679 bytes)
	I0214 00:19:22.074504  504876 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem (1078 bytes)
	I0214 00:19:22.074536  504876 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/cert.pem (1123 bytes)
	I0214 00:19:22.074567  504876 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/key.pem (1675 bytes)
	I0214 00:19:22.075158  504876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0214 00:19:22.100510  504876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 00:19:22.123980  504876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 00:19:22.149077  504876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 00:19:22.172911  504876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 00:19:22.197608  504876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0214 00:19:22.221791  504876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 00:19:22.246468  504876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0214 00:19:22.271102  504876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 00:19:22.295669  504876 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 00:19:22.313711  504876 ssh_runner.go:195] Run: openssl version
	I0214 00:19:22.319239  504876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 00:19:22.328455  504876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 00:19:22.331845  504876 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 00:19 /usr/share/ca-certificates/minikubeCA.pem
	I0214 00:19:22.331958  504876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 00:19:22.338813  504876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 00:19:22.348270  504876 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0214 00:19:22.351436  504876 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0214 00:19:22.351484  504876 kubeadm.go:404] StartCluster: {Name:addons-956081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-956081 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 00:19:22.351558  504876 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 00:19:22.351615  504876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 00:19:22.392937  504876 cri.go:89] found id: ""
	I0214 00:19:22.393007  504876 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 00:19:22.401648  504876 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 00:19:22.410081  504876 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0214 00:19:22.410146  504876 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 00:19:22.418630  504876 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 00:19:22.418709  504876 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0214 00:19:22.508080  504876 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0214 00:19:22.583092  504876 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 00:19:38.885875  504876 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0214 00:19:38.885934  504876 kubeadm.go:322] [preflight] Running pre-flight checks
	I0214 00:19:38.886022  504876 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0214 00:19:38.886078  504876 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0214 00:19:38.886114  504876 kubeadm.go:322] OS: Linux
	I0214 00:19:38.886161  504876 kubeadm.go:322] CGROUPS_CPU: enabled
	I0214 00:19:38.886210  504876 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0214 00:19:38.886259  504876 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0214 00:19:38.886308  504876 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0214 00:19:38.886357  504876 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0214 00:19:38.886406  504876 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0214 00:19:38.886453  504876 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0214 00:19:38.886501  504876 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0214 00:19:38.886547  504876 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0214 00:19:38.886618  504876 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 00:19:38.886709  504876 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 00:19:38.886799  504876 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 00:19:38.886860  504876 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 00:19:38.889536  504876 out.go:204]   - Generating certificates and keys ...
	I0214 00:19:38.889632  504876 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0214 00:19:38.889700  504876 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0214 00:19:38.889813  504876 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 00:19:38.889873  504876 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0214 00:19:38.889935  504876 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0214 00:19:38.889987  504876 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0214 00:19:38.890042  504876 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0214 00:19:38.890155  504876 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-956081 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 00:19:38.890208  504876 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0214 00:19:38.890317  504876 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-956081 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 00:19:38.890391  504876 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 00:19:38.890453  504876 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 00:19:38.890498  504876 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0214 00:19:38.890553  504876 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 00:19:38.890604  504876 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 00:19:38.890659  504876 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 00:19:38.890727  504876 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 00:19:38.890781  504876 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 00:19:38.890860  504876 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 00:19:38.890924  504876 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 00:19:38.893172  504876 out.go:204]   - Booting up control plane ...
	I0214 00:19:38.893284  504876 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 00:19:38.893388  504876 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 00:19:38.893466  504876 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 00:19:38.893586  504876 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 00:19:38.893678  504876 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 00:19:38.893733  504876 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0214 00:19:38.893887  504876 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 00:19:38.893969  504876 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502858 seconds
	I0214 00:19:38.894080  504876 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 00:19:38.894208  504876 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 00:19:38.894269  504876 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 00:19:38.894456  504876 kubeadm.go:322] [mark-control-plane] Marking the node addons-956081 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0214 00:19:38.894519  504876 kubeadm.go:322] [bootstrap-token] Using token: ksy5og.vtsgjk25rw8f5rur
	I0214 00:19:38.896397  504876 out.go:204]   - Configuring RBAC rules ...
	I0214 00:19:38.896515  504876 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 00:19:38.896606  504876 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 00:19:38.896748  504876 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 00:19:38.896883  504876 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 00:19:38.897006  504876 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 00:19:38.897102  504876 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 00:19:38.897220  504876 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 00:19:38.897266  504876 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0214 00:19:38.897315  504876 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0214 00:19:38.897323  504876 kubeadm.go:322] 
	I0214 00:19:38.897383  504876 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0214 00:19:38.897392  504876 kubeadm.go:322] 
	I0214 00:19:38.897469  504876 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0214 00:19:38.897477  504876 kubeadm.go:322] 
	I0214 00:19:38.897502  504876 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0214 00:19:38.897564  504876 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 00:19:38.897617  504876 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 00:19:38.897625  504876 kubeadm.go:322] 
	I0214 00:19:38.897679  504876 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0214 00:19:38.897686  504876 kubeadm.go:322] 
	I0214 00:19:38.897792  504876 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0214 00:19:38.897802  504876 kubeadm.go:322] 
	I0214 00:19:38.897854  504876 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0214 00:19:38.897931  504876 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 00:19:38.898002  504876 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 00:19:38.898011  504876 kubeadm.go:322] 
	I0214 00:19:38.898097  504876 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 00:19:38.898176  504876 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0214 00:19:38.898184  504876 kubeadm.go:322] 
	I0214 00:19:38.898267  504876 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ksy5og.vtsgjk25rw8f5rur \
	I0214 00:19:38.898372  504876 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:44f9d2d2d797c45382846d4b51b4e7b005961554b46257e185c55dad3bb0bd1d \
	I0214 00:19:38.898398  504876 kubeadm.go:322] 	--control-plane 
	I0214 00:19:38.898406  504876 kubeadm.go:322] 
	I0214 00:19:38.898490  504876 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0214 00:19:38.898497  504876 kubeadm.go:322] 
	I0214 00:19:38.898579  504876 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ksy5og.vtsgjk25rw8f5rur \
	I0214 00:19:38.898695  504876 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:44f9d2d2d797c45382846d4b51b4e7b005961554b46257e185c55dad3bb0bd1d 
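The --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A standard-library sketch that recomputes it from the CA file used in this cluster (/var/lib/minikube/certs/ca.crt):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the SubjectPublicKeyInfo, the same value kubeadm embeds after "sha256:".
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum[:])
}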
	I0214 00:19:38.898707  504876 cni.go:84] Creating CNI manager for ""
	I0214 00:19:38.898715  504876 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 00:19:38.900799  504876 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0214 00:19:38.902960  504876 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0214 00:19:38.916673  504876 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0214 00:19:38.916700  504876 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0214 00:19:38.969202  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0214 00:19:39.841106  504876 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 00:19:39.841226  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:39.841226  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=90664111bc55fed26ce3e984eae935c06b114802 minikube.k8s.io/name=addons-956081 minikube.k8s.io/updated_at=2024_02_14T00_19_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:39.983024  504876 ops.go:34] apiserver oom_adj: -16
	I0214 00:19:39.983128  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:40.483270  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:40.983278  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:41.483258  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:41.983277  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:42.483598  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:42.984056  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:43.483256  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:43.983824  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:44.484082  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:44.983256  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:45.483309  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:45.983760  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:46.483792  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:46.983247  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:47.483642  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:47.983995  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:48.483494  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:48.983537  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:49.483705  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:49.984076  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:50.483760  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:50.984161  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:51.483935  504876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:19:51.588356  504876 kubeadm.go:1088] duration metric: took 11.747207104s to wait for elevateKubeSystemPrivileges.
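The repeated `kubectl get sa default` runs above are a poll loop: minikube waits for the default service account to exist before binding kube-system privileges. A simplified sketch of that wait, shelling out to kubectl; the 500ms interval and 2-minute timeout are illustrative choices, not minikube's actual values.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA keeps running `kubectl get sa default` until it succeeds
// or the timeout expires.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("default service account is ready")
}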
	I0214 00:19:51.588387  504876 kubeadm.go:406] StartCluster complete in 29.236908468s
	I0214 00:19:51.588404  504876 settings.go:142] acquiring lock: {Name:mk6da46f5cb0f714c2fcf3244fbf0dfa768ab578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:19:51.588511  504876 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 00:19:51.588903  504876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/kubeconfig: {Name:mke09ed5dbaa4240bee61fddd1ec0468d82bdfbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:19:51.590818  504876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 00:19:51.591091  504876 config.go:182] Loaded profile config "addons-956081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0214 00:19:51.591129  504876 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0214 00:19:51.591196  504876 addons.go:69] Setting yakd=true in profile "addons-956081"
	I0214 00:19:51.591209  504876 addons.go:234] Setting addon yakd=true in "addons-956081"
	I0214 00:19:51.591256  504876 host.go:66] Checking if "addons-956081" exists ...
	I0214 00:19:51.591328  504876 addons.go:69] Setting ingress-dns=true in profile "addons-956081"
	I0214 00:19:51.591343  504876 addons.go:234] Setting addon ingress-dns=true in "addons-956081"
	I0214 00:19:51.591381  504876 host.go:66] Checking if "addons-956081" exists ...
	I0214 00:19:51.591685  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.591758  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.595548  504876 addons.go:69] Setting cloud-spanner=true in profile "addons-956081"
	I0214 00:19:51.595583  504876 addons.go:234] Setting addon cloud-spanner=true in "addons-956081"
	I0214 00:19:51.595628  504876 host.go:66] Checking if "addons-956081" exists ...
	I0214 00:19:51.596052  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.596180  504876 addons.go:69] Setting inspektor-gadget=true in profile "addons-956081"
	I0214 00:19:51.596195  504876 addons.go:234] Setting addon inspektor-gadget=true in "addons-956081"
	I0214 00:19:51.596224  504876 host.go:66] Checking if "addons-956081" exists ...
	I0214 00:19:51.596577  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.601780  504876 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-956081"
	I0214 00:19:51.601882  504876 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-956081"
	I0214 00:19:51.601938  504876 host.go:66] Checking if "addons-956081" exists ...
	I0214 00:19:51.605318  504876 addons.go:69] Setting metrics-server=true in profile "addons-956081"
	I0214 00:19:51.605350  504876 addons.go:234] Setting addon metrics-server=true in "addons-956081"
	I0214 00:19:51.605438  504876 host.go:66] Checking if "addons-956081" exists ...
	I0214 00:19:51.606172  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.606485  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.615663  504876 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-956081"
	I0214 00:19:51.615701  504876 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-956081"
	I0214 00:19:51.615775  504876 host.go:66] Checking if "addons-956081" exists ...
	I0214 00:19:51.619845  504876 addons.go:69] Setting default-storageclass=true in profile "addons-956081"
	I0214 00:19:51.619936  504876 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-956081"
	I0214 00:19:51.620641  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.633823  504876 addons.go:69] Setting registry=true in profile "addons-956081"
	I0214 00:19:51.633965  504876 addons.go:234] Setting addon registry=true in "addons-956081"
	I0214 00:19:51.634050  504876 host.go:66] Checking if "addons-956081" exists ...
	I0214 00:19:51.634711  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.637516  504876 addons.go:69] Setting storage-provisioner=true in profile "addons-956081"
	I0214 00:19:51.649790  504876 addons.go:234] Setting addon storage-provisioner=true in "addons-956081"
	I0214 00:19:51.650356  504876 host.go:66] Checking if "addons-956081" exists ...
	I0214 00:19:51.650114  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.679098  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.633894  504876 addons.go:69] Setting ingress=true in profile "addons-956081"
	I0214 00:19:51.680510  504876 addons.go:234] Setting addon ingress=true in "addons-956081"
	I0214 00:19:51.680566  504876 host.go:66] Checking if "addons-956081" exists ...
	I0214 00:19:51.681001  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.650265  504876 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-956081"
	I0214 00:19:51.705773  504876 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-956081"
	I0214 00:19:51.706116  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.650272  504876 addons.go:69] Setting volumesnapshots=true in profile "addons-956081"
	I0214 00:19:51.718780  504876 addons.go:234] Setting addon volumesnapshots=true in "addons-956081"
	I0214 00:19:51.718874  504876 host.go:66] Checking if "addons-956081" exists ...
	I0214 00:19:51.719375  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.633884  504876 addons.go:69] Setting gcp-auth=true in profile "addons-956081"
	I0214 00:19:51.733869  504876 mustload.go:65] Loading cluster: addons-956081
	I0214 00:19:51.734107  504876 config.go:182] Loaded profile config "addons-956081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0214 00:19:51.734391  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.797052  504876 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0214 00:19:51.799454  504876 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0214 00:19:51.799509  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0214 00:19:51.799590  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:51.812188  504876 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0214 00:19:51.806404  504876 addons.go:234] Setting addon default-storageclass=true in "addons-956081"
	I0214 00:19:51.818083  504876 host.go:66] Checking if "addons-956081" exists ...
	I0214 00:19:51.819901  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.823767  504876 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0214 00:19:51.828330  504876 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0214 00:19:51.828358  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0214 00:19:51.828427  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:51.823959  504876 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0214 00:19:51.823968  504876 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0214 00:19:51.823976  504876 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0214 00:19:51.823980  504876 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0214 00:19:51.823984  504876 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0214 00:19:51.843692  504876 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0214 00:19:51.858417  504876 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0214 00:19:51.845398  504876 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0214 00:19:51.871000  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0214 00:19:51.872945  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:51.893011  504876 out.go:177]   - Using image docker.io/registry:2.8.3
	I0214 00:19:51.894942  504876 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0214 00:19:51.894962  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0214 00:19:51.895043  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:51.898375  504876 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0214 00:19:51.905102  504876 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0214 00:19:51.905119  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0214 00:19:51.905174  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:51.911381  504876 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0214 00:19:51.913647  504876 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0214 00:19:51.904003  504876 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0214 00:19:51.904075  504876 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0214 00:19:51.905027  504876 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-956081"
	I0214 00:19:51.916122  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0214 00:19:51.916191  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0214 00:19:51.918388  504876 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0214 00:19:51.918451  504876 host.go:66] Checking if "addons-956081" exists ...
	I0214 00:19:51.918524  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:51.920677  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:51.932497  504876 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0214 00:19:51.935311  504876 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0214 00:19:51.933659  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:19:51.947902  504876 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0214 00:19:51.954460  504876 host.go:66] Checking if "addons-956081" exists ...
	I0214 00:19:51.937858  504876 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 00:19:51.937846  504876 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0214 00:19:51.963262  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0214 00:19:51.963355  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:51.964827  504876 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0214 00:19:51.974484  504876 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0214 00:19:51.974504  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0214 00:19:51.974569  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:52.006208  504876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0214 00:19:52.010055  504876 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 00:19:52.010119  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 00:19:52.010522  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:52.029019  504876 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0214 00:19:52.064852  504876 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0214 00:19:52.064880  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0214 00:19:52.064947  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:52.068174  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:52.086739  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:52.093380  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:52.120249  504876 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 00:19:52.120274  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 00:19:52.120356  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:52.145870  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:52.165870  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:52.192094  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:52.207530  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:52.219445  504876 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0214 00:19:52.222918  504876 out.go:177]   - Using image docker.io/busybox:stable
	I0214 00:19:52.226593  504876 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0214 00:19:52.226613  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0214 00:19:52.226675  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:19:52.247297  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:52.253940  504876 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-956081" context rescaled to 1 replicas
	I0214 00:19:52.254012  504876 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 00:19:52.256428  504876 out.go:177] * Verifying Kubernetes components...
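The 6m0s wait declared above is for the node "Ready" condition, which the node_ready.go lines further down keep polling. Purely for illustration (not part of the test run), a rough manual equivalent against the same cluster would be to watch the node from the host:

	kubectl --context addons-956081 get node addons-956081 -w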
	I0214 00:19:52.254989  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:52.255654  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:52.259723  504876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 00:19:52.260603  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	W0214 00:19:52.261629  504876 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0214 00:19:52.261651  504876 retry.go:31] will retry after 239.990182ms: ssh: handshake failed: EOF
	I0214 00:19:52.278926  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:52.293869  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:19:52.379527  504876 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0214 00:19:52.379597  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0214 00:19:52.418576  504876 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0214 00:19:52.418646  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0214 00:19:52.467796  504876 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0214 00:19:52.467864  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0214 00:19:52.531557  504876 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0214 00:19:52.531625  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0214 00:19:52.553091  504876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 00:19:52.584323  504876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0214 00:19:52.609236  504876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 00:19:52.624521  504876 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0214 00:19:52.624590  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0214 00:19:52.669880  504876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0214 00:19:52.673581  504876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0214 00:19:52.678951  504876 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0214 00:19:52.679016  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0214 00:19:52.723372  504876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0214 00:19:52.792383  504876 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0214 00:19:52.792444  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0214 00:19:52.796430  504876 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0214 00:19:52.796497  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0214 00:19:52.805931  504876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0214 00:19:52.828626  504876 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0214 00:19:52.828648  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0214 00:19:52.832668  504876 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0214 00:19:52.832732  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0214 00:19:52.835035  504876 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0214 00:19:52.835093  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0214 00:19:52.942256  504876 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0214 00:19:52.942324  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0214 00:19:52.945280  504876 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0214 00:19:52.945334  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0214 00:19:52.956070  504876 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0214 00:19:52.956136  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0214 00:19:52.998849  504876 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0214 00:19:52.998920  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0214 00:19:53.008259  504876 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0214 00:19:53.008334  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0214 00:19:53.065997  504876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0214 00:19:53.102183  504876 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0214 00:19:53.102209  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0214 00:19:53.109814  504876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0214 00:19:53.114831  504876 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0214 00:19:53.114856  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0214 00:19:53.164851  504876 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0214 00:19:53.164877  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0214 00:19:53.168390  504876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0214 00:19:53.286751  504876 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0214 00:19:53.286777  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0214 00:19:53.295284  504876 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0214 00:19:53.295310  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0214 00:19:53.296599  504876 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0214 00:19:53.296615  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0214 00:19:53.419194  504876 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0214 00:19:53.419220  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0214 00:19:53.428314  504876 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 00:19:53.428339  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0214 00:19:53.432450  504876 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0214 00:19:53.432474  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0214 00:19:53.468058  504876 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0214 00:19:53.468078  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0214 00:19:53.474993  504876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 00:19:53.512291  504876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0214 00:19:53.525981  504876 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0214 00:19:53.526008  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0214 00:19:53.606574  504876 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0214 00:19:53.606597  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0214 00:19:53.696175  504876 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0214 00:19:53.696203  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0214 00:19:53.785103  504876 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0214 00:19:53.785126  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0214 00:19:53.818193  504876 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0214 00:19:53.818218  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0214 00:19:53.882284  504876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0214 00:19:55.050695  504876 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.044405531s)
	I0214 00:19:55.050818  504876 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
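The completed ConfigMap rewrite above is what injects the host.minikube.internal record (192.168.49.1) into CoreDNS. As a rough way to confirm the result by hand, one could dump the ConfigMap and look for the hosts stanza the sed pipeline inserts (illustrative only; exact indentation comes from the live Corefile):

	kubectl --context addons-956081 -n kube-system get configmap coredns -o yaml
	# the Corefile data is expected to contain, roughly:
	#     hosts {
	#        192.168.49.1 host.minikube.internal
	#        fallthrough
	#     }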
	I0214 00:19:55.050787  504876 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.791039489s)
	I0214 00:19:55.051716  504876 node_ready.go:35] waiting up to 6m0s for node "addons-956081" to be "Ready" ...
	I0214 00:19:55.671281  504876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.11815554s)
	I0214 00:19:57.197446  504876 node_ready.go:58] node "addons-956081" has status "Ready":"False"
	I0214 00:19:57.434410  504876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.850032553s)
	I0214 00:19:57.436977  504876 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-956081 service yakd-dashboard -n yakd-dashboard
	
	I0214 00:19:57.434701  504876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.825397481s)
	I0214 00:19:57.434726  504876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.764779511s)
	I0214 00:19:57.434768  504876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.761127445s)
	I0214 00:19:57.434817  504876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.711381581s)
	I0214 00:19:57.476819  504876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.670810508s)
	I0214 00:19:58.054740  504876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.988704502s)
	I0214 00:19:58.055196  504876 addons.go:470] Verifying addon ingress=true in "addons-956081"
	I0214 00:19:58.054854  504876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.944993144s)
	I0214 00:19:58.055337  504876 addons.go:470] Verifying addon registry=true in "addons-956081"
	I0214 00:19:58.057604  504876 out.go:177] * Verifying registry addon...
	I0214 00:19:58.059716  504876 out.go:177] * Verifying ingress addon...
	I0214 00:19:58.054991  504876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.579970371s)
	I0214 00:19:58.055036  504876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.542716932s)
	I0214 00:19:58.054912  504876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.886496553s)
	W0214 00:19:58.062097  504876 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0214 00:19:58.064470  504876 retry.go:31] will retry after 329.674217ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
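The apply failure above (repeated verbatim in the retry notice) is an ordering issue rather than a bad manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but it is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the REST mapping for the new kind is not yet available, hence "ensure CRDs are installed first". minikube handles this by retrying; the re-apply with --force at 00:19:58.394 below completes in about 1.7s. As a sketch only, the same fix done by hand would apply the CRDs first, wait for them to be established, and only then apply the class (file paths as they appear in this log):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl wait \
	  --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml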
	I0214 00:19:58.062949  504876 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0214 00:19:58.062999  504876 addons.go:470] Verifying addon metrics-server=true in "addons-956081"
	I0214 00:19:58.065194  504876 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0214 00:19:58.078223  504876 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0214 00:19:58.078252  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:19:58.078562  504876 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0214 00:19:58.078579  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:19:58.293694  504876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.411361686s)
	I0214 00:19:58.293794  504876 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-956081"
	I0214 00:19:58.296250  504876 out.go:177] * Verifying csi-hostpath-driver addon...
	I0214 00:19:58.299835  504876 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0214 00:19:58.309244  504876 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0214 00:19:58.309305  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
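The kapi waits in this log poll pods by label selector until they leave Pending. Purely for illustration, equivalent one-off checks against the same selectors and namespaces would be:

	kubectl --context addons-956081 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	kubectl --context addons-956081 -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	kubectl --context addons-956081 -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver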
	I0214 00:19:58.394949  504876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 00:19:58.580436  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:19:58.587055  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:19:58.812109  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:19:59.112112  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:19:59.141744  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:19:59.314957  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:19:59.601984  504876 node_ready.go:58] node "addons-956081" has status "Ready":"False"
	I0214 00:19:59.610420  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:19:59.611606  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:19:59.826562  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:00.097530  504876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.702525774s)
	I0214 00:20:00.122484  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:00.165315  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:00.308204  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:00.574728  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:00.576890  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:00.805613  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:00.962411  504876 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0214 00:20:00.962531  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:20:00.995425  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:20:01.071759  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:01.072189  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:01.193520  504876 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0214 00:20:01.246035  504876 addons.go:234] Setting addon gcp-auth=true in "addons-956081"
	I0214 00:20:01.246143  504876 host.go:66] Checking if "addons-956081" exists ...
	I0214 00:20:01.246676  504876 cli_runner.go:164] Run: docker container inspect addons-956081 --format={{.State.Status}}
	I0214 00:20:01.273279  504876 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0214 00:20:01.273335  504876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956081
	I0214 00:20:01.303349  504876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/addons-956081/id_rsa Username:docker}
	I0214 00:20:01.306852  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:01.435416  504876 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0214 00:20:01.437903  504876 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0214 00:20:01.441532  504876 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0214 00:20:01.441610  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0214 00:20:01.504506  504876 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0214 00:20:01.504582  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0214 00:20:01.571712  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:01.578433  504876 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0214 00:20:01.578495  504876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0214 00:20:01.580782  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:01.632008  504876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0214 00:20:01.809491  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:02.055628  504876 node_ready.go:58] node "addons-956081" has status "Ready":"False"
	I0214 00:20:02.071853  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:02.073346  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:02.308905  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:02.581669  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:02.587396  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:02.665641  504876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.033598893s)
	I0214 00:20:02.668698  504876 addons.go:470] Verifying addon gcp-auth=true in "addons-956081"
	I0214 00:20:02.670846  504876 out.go:177] * Verifying gcp-auth addon...
	I0214 00:20:02.674035  504876 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0214 00:20:02.702324  504876 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0214 00:20:02.702392  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:02.805517  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:03.072780  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:03.073794  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:03.179526  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:03.305558  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:03.571080  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:03.573170  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:03.681579  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:03.805072  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:04.057611  504876 node_ready.go:58] node "addons-956081" has status "Ready":"False"
	I0214 00:20:04.072129  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:04.073613  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:04.178203  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:04.306646  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:04.570564  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:04.571834  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:04.681929  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:04.804158  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:05.069704  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:05.070665  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:05.178715  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:05.307914  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:05.568722  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:05.570466  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:05.682277  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:05.804242  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:06.070005  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:06.070860  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:06.177610  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:06.304231  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:06.557265  504876 node_ready.go:58] node "addons-956081" has status "Ready":"False"
	I0214 00:20:06.583159  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:06.584895  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:06.684352  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:06.804984  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:07.070679  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:07.071956  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:07.178215  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:07.305585  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:07.569774  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:07.570568  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:07.681370  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:07.804281  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:08.070005  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:08.070987  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:08.177618  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:08.304474  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:08.569323  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:08.569495  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:08.680305  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:08.804716  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:09.055842  504876 node_ready.go:58] node "addons-956081" has status "Ready":"False"
	I0214 00:20:09.069551  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:09.070384  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:09.184910  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:09.304312  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:09.569858  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:09.570511  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:09.681632  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:09.804832  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:10.069973  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:10.070792  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:10.178602  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:10.304426  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:10.569912  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:10.570344  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:10.681656  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:10.805038  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:11.069177  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:11.070659  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:11.177828  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:11.304282  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:11.555729  504876 node_ready.go:58] node "addons-956081" has status "Ready":"False"
	I0214 00:20:11.582049  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:11.583457  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:11.681705  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:11.804681  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:12.071169  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:12.071580  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:12.178326  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:12.305133  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:12.569282  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:12.570270  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:12.681802  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:12.804232  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:13.071534  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:13.072276  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:13.177847  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:13.304721  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:13.569263  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:13.570111  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:13.678680  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:13.805538  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:14.055249  504876 node_ready.go:58] node "addons-956081" has status "Ready":"False"
	I0214 00:20:14.069241  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:14.069944  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:14.177918  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:14.304112  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:14.569869  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:14.570505  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:14.681258  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:14.804013  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:15.069589  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:15.070335  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:15.178283  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:15.304424  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:15.570642  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:15.571598  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:15.681944  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:15.804437  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:16.056105  504876 node_ready.go:58] node "addons-956081" has status "Ready":"False"
	I0214 00:20:16.069893  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:16.070842  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:16.177886  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:16.304856  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:16.570098  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:16.570481  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:16.681363  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:16.804621  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:17.068534  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:17.070624  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:17.177927  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:17.304208  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:17.571184  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:17.571906  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:17.678521  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:17.804825  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:18.056368  504876 node_ready.go:58] node "addons-956081" has status "Ready":"False"
	I0214 00:20:18.070426  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:18.070782  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:18.178159  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:18.304433  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:18.569696  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:18.570116  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:18.677660  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:18.808707  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:19.069392  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:19.069912  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:19.178128  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:19.304771  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:19.569704  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:19.570224  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:19.683418  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:19.804654  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:20.070780  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:20.072254  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:20.177972  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:20.305061  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:20.555913  504876 node_ready.go:58] node "addons-956081" has status "Ready":"False"
	I0214 00:20:20.570232  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:20.570474  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:20.677876  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:20.804561  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:21.069663  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:21.069979  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:21.178665  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:21.304545  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:21.570580  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:21.571488  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:21.680420  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:21.804599  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:22.070162  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:22.071218  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:22.178035  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:22.303908  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:22.568547  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:22.575385  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:22.681067  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:22.804427  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:23.055681  504876 node_ready.go:58] node "addons-956081" has status "Ready":"False"
	I0214 00:20:23.069784  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:23.070451  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:23.177752  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:23.304799  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:23.570125  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:23.570524  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:23.678158  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:23.804457  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:24.070316  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:24.071408  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:24.177610  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:24.305112  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:24.568857  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:24.570554  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:24.681454  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:24.804620  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:25.092558  504876 node_ready.go:49] node "addons-956081" has status "Ready":"True"
	I0214 00:20:25.092592  504876 node_ready.go:38] duration metric: took 30.040817686s waiting for node "addons-956081" to be "Ready" ...
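The node_ready lines above record minikube re-checking the node until its Ready condition flips to "True" (about 30s in this run). Below is a minimal sketch of that kind of poll, shelling out to kubectl instead of using minikube's own helpers; the profile/node name addons-956081 comes from the log, while the 2-second interval and 5-minute timeout are illustrative assumptions.

```go
// Sketch only (not minikube's node_ready code): poll a node's Ready condition
// by shelling out to kubectl until the reported status is "True".
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// nodeReady returns true when the named node reports Ready=True.
func nodeReady(ctx, node string) bool {
	out, err := exec.Command("kubectl", "--context", ctx, "get", "node", node,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	// addons-956081 is taken from the log; interval and timeout are assumptions.
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		if nodeReady("addons-956081", "addons-956081") {
			fmt.Println(`node "addons-956081" is Ready`)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node to become Ready")
}
```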
	I0214 00:20:25.092606  504876 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 00:20:25.109912  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:25.134599  504876 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b4tsx" in "kube-system" namespace to be "Ready" ...
	I0214 00:20:25.144470  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:25.191743  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:25.313380  504876 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0214 00:20:25.313413  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:25.573206  504876 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0214 00:20:25.573279  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:25.575788  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:25.680933  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:25.806811  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:26.078930  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:26.083320  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:26.198147  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:26.316611  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:26.573167  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:26.574298  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:26.687177  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:26.806827  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:27.071810  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:27.073336  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:27.143257  504876 pod_ready.go:102] pod "coredns-5dd5756b68-b4tsx" in "kube-system" namespace has status "Ready":"False"
	I0214 00:20:27.179055  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:27.346984  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:27.572703  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:27.573089  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:27.641130  504876 pod_ready.go:92] pod "coredns-5dd5756b68-b4tsx" in "kube-system" namespace has status "Ready":"True"
	I0214 00:20:27.641157  504876 pod_ready.go:81] duration metric: took 2.506514402s waiting for pod "coredns-5dd5756b68-b4tsx" in "kube-system" namespace to be "Ready" ...
	I0214 00:20:27.641176  504876 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-956081" in "kube-system" namespace to be "Ready" ...
	I0214 00:20:27.656863  504876 pod_ready.go:92] pod "etcd-addons-956081" in "kube-system" namespace has status "Ready":"True"
	I0214 00:20:27.656889  504876 pod_ready.go:81] duration metric: took 15.705137ms waiting for pod "etcd-addons-956081" in "kube-system" namespace to be "Ready" ...
	I0214 00:20:27.656911  504876 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-956081" in "kube-system" namespace to be "Ready" ...
	I0214 00:20:27.670236  504876 pod_ready.go:92] pod "kube-apiserver-addons-956081" in "kube-system" namespace has status "Ready":"True"
	I0214 00:20:27.670271  504876 pod_ready.go:81] duration metric: took 13.342784ms waiting for pod "kube-apiserver-addons-956081" in "kube-system" namespace to be "Ready" ...
	I0214 00:20:27.670283  504876 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-956081" in "kube-system" namespace to be "Ready" ...
	I0214 00:20:27.690544  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:27.690699  504876 pod_ready.go:92] pod "kube-controller-manager-addons-956081" in "kube-system" namespace has status "Ready":"True"
	I0214 00:20:27.690715  504876 pod_ready.go:81] duration metric: took 20.423477ms waiting for pod "kube-controller-manager-addons-956081" in "kube-system" namespace to be "Ready" ...
	I0214 00:20:27.690738  504876 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tsn84" in "kube-system" namespace to be "Ready" ...
	I0214 00:20:27.699978  504876 pod_ready.go:92] pod "kube-proxy-tsn84" in "kube-system" namespace has status "Ready":"True"
	I0214 00:20:27.700011  504876 pod_ready.go:81] duration metric: took 9.259968ms waiting for pod "kube-proxy-tsn84" in "kube-system" namespace to be "Ready" ...
	I0214 00:20:27.700023  504876 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-956081" in "kube-system" namespace to be "Ready" ...
	I0214 00:20:27.806602  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:28.039751  504876 pod_ready.go:92] pod "kube-scheduler-addons-956081" in "kube-system" namespace has status "Ready":"True"
	I0214 00:20:28.039780  504876 pod_ready.go:81] duration metric: took 339.748275ms waiting for pod "kube-scheduler-addons-956081" in "kube-system" namespace to be "Ready" ...
	I0214 00:20:28.039812  504876 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-2xwxl" in "kube-system" namespace to be "Ready" ...
	I0214 00:20:28.073681  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:28.075077  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:28.178790  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:28.311370  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:28.572928  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:28.573766  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:28.682828  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:28.807009  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:29.073180  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:29.074703  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:29.178611  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:29.315288  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:29.578724  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:29.580094  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:29.700123  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:29.824009  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:30.067786  504876 pod_ready.go:102] pod "metrics-server-69cf46c98-2xwxl" in "kube-system" namespace has status "Ready":"False"
	I0214 00:20:30.087309  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:30.095410  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:30.182727  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:30.306847  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:30.597822  504876 pod_ready.go:92] pod "metrics-server-69cf46c98-2xwxl" in "kube-system" namespace has status "Ready":"True"
	I0214 00:20:30.597848  504876 pod_ready.go:81] duration metric: took 2.55801527s waiting for pod "metrics-server-69cf46c98-2xwxl" in "kube-system" namespace to be "Ready" ...
	I0214 00:20:30.597885  504876 pod_ready.go:38] duration metric: took 5.505248208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
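The pod_ready lines above wait on each system-critical pod individually, and the interleaved kapi lines do the same per addon label selector. A hedged equivalent using `kubectl wait --for=condition=Ready` driven from Go is sketched below; only the label values (for example k8s-app=kube-dns) appear in the log, while the namespace and timeout are assumptions.

```go
// Hedged sketch: express the per-label Ready waits as `kubectl wait` calls.
package main

import (
	"fmt"
	"os/exec"
)

// waitReady blocks until pods matching the selector are Ready or the timeout hits.
func waitReady(ctx, namespace, selector, timeout string) error {
	cmd := exec.Command("kubectl", "--context", ctx, "-n", namespace, "wait",
		"--for=condition=Ready", "pod", "-l", selector, "--timeout="+timeout)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	for _, sel := range []string{"k8s-app=kube-dns", "component=kube-apiserver"} {
		if err := waitReady("addons-956081", "kube-system", sel, "6m"); err != nil {
			fmt.Printf("wait for %s failed: %v\n", sel, err)
		}
	}
}
```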
	I0214 00:20:30.597905  504876 api_server.go:52] waiting for apiserver process to appear ...
	I0214 00:20:30.597984  504876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 00:20:30.598639  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:30.599537  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:30.659316  504876 api_server.go:72] duration metric: took 38.405261547s to wait for apiserver process to appear ...
	I0214 00:20:30.659343  504876 api_server.go:88] waiting for apiserver healthz status ...
	I0214 00:20:30.659364  504876 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0214 00:20:30.692473  504876 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0214 00:20:30.695542  504876 api_server.go:141] control plane version: v1.28.4
	I0214 00:20:30.695572  504876 api_server.go:131] duration metric: took 36.220495ms to wait for apiserver health ...
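The healthz probe logged above is a plain HTTPS GET against https://192.168.49.2:8443/healthz that expects HTTP 200 with the body "ok". A minimal sketch follows; skipping TLS verification is a shortcut for brevity here, a real client would trust the cluster's CA certificate instead.

```go
// Minimal sketch of the apiserver healthz check from the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		// Assumption for the sketch: skip certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
}
```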
	I0214 00:20:30.695582  504876 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 00:20:30.718345  504876 system_pods.go:59] 18 kube-system pods found
	I0214 00:20:30.718382  504876 system_pods.go:61] "coredns-5dd5756b68-b4tsx" [cc634ee3-34b0-449d-b9e5-f7bfa9770c3c] Running
	I0214 00:20:30.718393  504876 system_pods.go:61] "csi-hostpath-attacher-0" [d358f676-7ebb-4344-b388-e2de19ff474b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0214 00:20:30.718419  504876 system_pods.go:61] "csi-hostpath-resizer-0" [2d9799ba-97d2-4854-928e-aeeb8c5a7e28] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0214 00:20:30.718435  504876 system_pods.go:61] "csi-hostpathplugin-twwbc" [27056e80-4c24-4aeb-8aa2-780202b4015e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0214 00:20:30.718442  504876 system_pods.go:61] "etcd-addons-956081" [c94e8656-3c1b-46d8-8648-24175387fb97] Running
	I0214 00:20:30.718453  504876 system_pods.go:61] "kindnet-zqhrf" [353c2c03-3307-43f3-a709-3f8a947aa225] Running
	I0214 00:20:30.718459  504876 system_pods.go:61] "kube-apiserver-addons-956081" [5017d613-a24d-40c8-a4d7-ed244aa2d614] Running
	I0214 00:20:30.718465  504876 system_pods.go:61] "kube-controller-manager-addons-956081" [9ebd9c6d-98ff-4a23-8074-27f7813de5bc] Running
	I0214 00:20:30.718478  504876 system_pods.go:61] "kube-ingress-dns-minikube" [30aa225e-c9bd-41fb-ba09-387e752a9783] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0214 00:20:30.718497  504876 system_pods.go:61] "kube-proxy-tsn84" [3d32945c-8846-42e8-b34e-1aaf58bea230] Running
	I0214 00:20:30.718506  504876 system_pods.go:61] "kube-scheduler-addons-956081" [e813783a-8a69-4d63-9508-0ff2b0ea2a6b] Running
	I0214 00:20:30.718513  504876 system_pods.go:61] "metrics-server-69cf46c98-2xwxl" [ea037097-4f57-49dc-a932-6fe9c01e7e65] Running
	I0214 00:20:30.718536  504876 system_pods.go:61] "nvidia-device-plugin-daemonset-kw86f" [697d1b46-8c72-42e2-9711-70685bbcb1b3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0214 00:20:30.718544  504876 system_pods.go:61] "registry-d2bdl" [fda90818-b101-4cff-a2bb-f49e44f3b67a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0214 00:20:30.718559  504876 system_pods.go:61] "registry-proxy-fvbb5" [156c0724-e6f5-4c89-8612-337af2fb9919] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0214 00:20:30.718567  504876 system_pods.go:61] "snapshot-controller-58dbcc7b99-7dfh6" [eecacf97-1b1f-48b4-b0ac-a3dcaf64c255] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 00:20:30.718579  504876 system_pods.go:61] "snapshot-controller-58dbcc7b99-9ctmk" [43f66f59-d97b-4b70-979b-521b854bf4f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 00:20:30.718585  504876 system_pods.go:61] "storage-provisioner" [5c12c12f-86da-4118-9117-86c0acb8a44e] Running
	I0214 00:20:30.718601  504876 system_pods.go:74] duration metric: took 22.985985ms to wait for pod list to return data ...
	I0214 00:20:30.718617  504876 default_sa.go:34] waiting for default service account to be created ...
	I0214 00:20:30.729056  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:30.807624  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:30.838577  504876 default_sa.go:45] found service account: "default"
	I0214 00:20:30.838603  504876 default_sa.go:55] duration metric: took 119.979728ms for default service account to be created ...
	I0214 00:20:30.838613  504876 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 00:20:31.047967  504876 system_pods.go:86] 18 kube-system pods found
	I0214 00:20:31.048010  504876 system_pods.go:89] "coredns-5dd5756b68-b4tsx" [cc634ee3-34b0-449d-b9e5-f7bfa9770c3c] Running
	I0214 00:20:31.048022  504876 system_pods.go:89] "csi-hostpath-attacher-0" [d358f676-7ebb-4344-b388-e2de19ff474b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0214 00:20:31.048030  504876 system_pods.go:89] "csi-hostpath-resizer-0" [2d9799ba-97d2-4854-928e-aeeb8c5a7e28] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0214 00:20:31.048042  504876 system_pods.go:89] "csi-hostpathplugin-twwbc" [27056e80-4c24-4aeb-8aa2-780202b4015e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0214 00:20:31.048054  504876 system_pods.go:89] "etcd-addons-956081" [c94e8656-3c1b-46d8-8648-24175387fb97] Running
	I0214 00:20:31.048068  504876 system_pods.go:89] "kindnet-zqhrf" [353c2c03-3307-43f3-a709-3f8a947aa225] Running
	I0214 00:20:31.048073  504876 system_pods.go:89] "kube-apiserver-addons-956081" [5017d613-a24d-40c8-a4d7-ed244aa2d614] Running
	I0214 00:20:31.048079  504876 system_pods.go:89] "kube-controller-manager-addons-956081" [9ebd9c6d-98ff-4a23-8074-27f7813de5bc] Running
	I0214 00:20:31.048093  504876 system_pods.go:89] "kube-ingress-dns-minikube" [30aa225e-c9bd-41fb-ba09-387e752a9783] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0214 00:20:31.048099  504876 system_pods.go:89] "kube-proxy-tsn84" [3d32945c-8846-42e8-b34e-1aaf58bea230] Running
	I0214 00:20:31.048110  504876 system_pods.go:89] "kube-scheduler-addons-956081" [e813783a-8a69-4d63-9508-0ff2b0ea2a6b] Running
	I0214 00:20:31.048115  504876 system_pods.go:89] "metrics-server-69cf46c98-2xwxl" [ea037097-4f57-49dc-a932-6fe9c01e7e65] Running
	I0214 00:20:31.048123  504876 system_pods.go:89] "nvidia-device-plugin-daemonset-kw86f" [697d1b46-8c72-42e2-9711-70685bbcb1b3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0214 00:20:31.048130  504876 system_pods.go:89] "registry-d2bdl" [fda90818-b101-4cff-a2bb-f49e44f3b67a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0214 00:20:31.048141  504876 system_pods.go:89] "registry-proxy-fvbb5" [156c0724-e6f5-4c89-8612-337af2fb9919] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0214 00:20:31.048149  504876 system_pods.go:89] "snapshot-controller-58dbcc7b99-7dfh6" [eecacf97-1b1f-48b4-b0ac-a3dcaf64c255] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 00:20:31.048161  504876 system_pods.go:89] "snapshot-controller-58dbcc7b99-9ctmk" [43f66f59-d97b-4b70-979b-521b854bf4f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 00:20:31.048168  504876 system_pods.go:89] "storage-provisioner" [5c12c12f-86da-4118-9117-86c0acb8a44e] Running
	I0214 00:20:31.048181  504876 system_pods.go:126] duration metric: took 209.561168ms to wait for k8s-apps to be running ...
	I0214 00:20:31.048189  504876 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 00:20:31.048255  504876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 00:20:31.067928  504876 system_svc.go:56] duration metric: took 19.730084ms WaitForService to wait for kubelet.
	I0214 00:20:31.067951  504876 kubeadm.go:581] duration metric: took 38.81390168s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
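The kubelet check above reduces to the exit status of a `systemctl is-active --quiet ... kubelet` run on the node (minikube executes it through its SSH runner with sudo). A simplified local sketch of the same idea:

```go
// Simplified sketch of the kubelet liveness check: the exit code of
// `systemctl is-active --quiet kubelet` is the whole answer. Running it
// locally (no SSH runner, no sudo) is an assumption for brevity.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
```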
	I0214 00:20:31.067973  504876 node_conditions.go:102] verifying NodePressure condition ...
	I0214 00:20:31.073351  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:31.074242  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:31.178103  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:31.239097  504876 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 00:20:31.239130  504876 node_conditions.go:123] node cpu capacity is 2
	I0214 00:20:31.239144  504876 node_conditions.go:105] duration metric: took 171.165559ms to run NodePressure ...
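The NodePressure step above reads the node's capacity figures (203034800Ki of ephemeral storage and 2 CPUs in this run). The sketch below pulls the same figures with a kubectl jsonpath query; the context and node name come from the log, and the output formatting is an assumption.

```go
// Sketch of the capacity read behind the node_conditions lines: fetch cpu and
// ephemeral-storage capacity for the node via kubectl jsonpath.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "addons-956081", "get", "node", "addons-956081",
		"-o", `jsonpath=cpu={.status.capacity.cpu} ephemeral-storage={.status.capacity.ephemeral-storage}`).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println(string(out)) // expected here: cpu=2 ephemeral-storage=203034800Ki
}
```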
	I0214 00:20:31.239156  504876 start.go:228] waiting for startup goroutines ...
	I0214 00:20:31.323775  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:31.569472  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:31.570688  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:31.680880  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:31.805372  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:32.071171  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:32.072422  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:32.178418  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:32.308676  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:32.571572  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:32.572542  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:32.690632  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:32.806871  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:33.084406  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:33.087823  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:33.178387  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:33.311314  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:33.571830  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:33.575554  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:33.686370  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:33.807562  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:34.076546  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:34.084008  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:34.189621  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:34.315566  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:34.573450  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:34.574732  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:34.687983  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:34.808013  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:35.071028  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:35.072919  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:35.178359  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:35.305788  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:35.576182  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:35.579347  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:35.681277  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:35.805444  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:36.070232  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:36.071257  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:36.178194  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:36.305938  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:36.570487  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:36.571490  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:36.684787  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:36.806674  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:37.075553  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:37.076479  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:37.179787  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:37.308253  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:37.570167  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:37.572391  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:37.682818  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:37.806595  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:38.070827  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:38.073329  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:38.178356  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:38.306707  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:38.573747  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:38.575080  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:38.679799  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:38.813754  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:39.079083  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:39.082556  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:39.178262  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:39.309526  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:39.572260  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:39.574476  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:39.680509  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:39.812152  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:40.071672  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:40.073068  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:40.179187  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:40.308438  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:40.581037  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:40.582903  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:40.680452  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:40.806713  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:41.078606  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:41.083761  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:41.179691  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:41.307116  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:41.570104  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:41.571090  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:41.680016  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:41.806433  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:42.072608  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:42.073578  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:42.179471  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:42.306798  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:42.570521  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:42.571116  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:42.681793  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:42.805101  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:43.072373  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:43.073659  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:43.178728  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:43.309045  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:43.571422  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:43.572438  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:43.688796  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:43.805900  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:44.071228  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:44.072735  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:44.178267  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:44.305477  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:44.570460  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:44.571521  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:44.682162  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:44.810045  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:45.094723  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:45.108284  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:45.207508  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:45.308685  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:45.571743  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:45.573917  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:45.687973  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:45.809701  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:46.070074  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:46.071525  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:46.178233  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:46.308802  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:46.573328  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:46.573902  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:46.681510  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:46.806085  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:47.070228  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:47.070775  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:47.178419  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:47.306194  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:47.571697  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:47.573995  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:47.684662  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:47.806985  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:48.070345  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:48.072700  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:48.179242  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:48.306701  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:48.571907  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:48.579296  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:48.685638  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:48.806012  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:49.072034  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:49.072600  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:49.178241  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:49.306162  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:49.569821  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:49.571202  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:49.682148  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:49.811954  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:50.070881  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:50.073083  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:50.178825  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:50.307235  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:50.570286  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:50.571508  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:50.687176  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:50.806427  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:51.072227  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:51.077178  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:51.180125  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:51.305654  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:51.570180  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:51.576425  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:51.683521  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:51.806281  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:52.069924  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:52.071357  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:52.178124  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:52.306728  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:52.572013  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:52.572932  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:52.682554  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:52.809084  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:53.070120  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:53.073267  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:53.178964  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:53.309705  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:53.572162  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:53.572616  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:53.691330  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:53.805936  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:54.069808  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:54.070916  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:54.178515  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:54.305508  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:54.568753  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:54.571071  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:54.687436  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:54.805580  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:55.070417  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:55.075607  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:55.178953  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:55.306208  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:55.570350  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:55.571629  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:55.677925  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:55.805490  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:56.072938  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:56.073600  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:56.177774  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:56.305907  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:56.572009  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:56.573906  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:56.688435  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:56.807116  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:57.072363  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:57.075329  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:57.179471  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:57.307386  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:57.571411  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:57.574666  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:57.683574  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:57.806397  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:58.076966  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:58.086120  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:58.178004  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:58.309435  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:58.603616  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:58.604378  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:58.685823  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:58.806410  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:59.072739  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:59.073405  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:59.178454  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:59.306170  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:20:59.570825  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:20:59.571201  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:20:59.681254  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:20:59.806169  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:00.102303  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:21:00.115801  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:00.186489  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:00.308135  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:00.571087  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:00.572005  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:21:00.678221  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:00.807315  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:01.073479  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:01.076588  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:21:01.178413  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:01.308232  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:01.571895  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:01.575247  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:21:01.678610  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:01.806602  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:02.071834  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:21:02.073133  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:02.187939  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:02.315979  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:02.576718  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:02.592367  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:21:02.691443  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:02.810195  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:03.071355  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:21:03.072450  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:03.186677  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:03.306395  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:03.576745  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:21:03.580816  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:03.690952  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:03.805881  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:04.071349  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:04.074368  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:21:04.178144  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:04.316074  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:04.571187  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:04.571982  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 00:21:04.679899  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:04.807910  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:05.069372  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:05.080710  504876 kapi.go:107] duration metric: took 1m7.015510442s to wait for kubernetes.io/minikube-addons=registry ...
	I0214 00:21:05.178593  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:05.306886  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:05.569669  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:05.683948  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:05.806498  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:06.069972  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:06.177483  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:06.305912  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:06.569242  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:06.681437  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:06.806486  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:07.068856  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:07.178309  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:07.306919  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:07.570615  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:07.690275  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:07.828819  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:08.071431  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:08.178534  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:08.309310  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:08.588962  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:08.678554  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:08.806252  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:09.072137  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:09.179486  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:09.307208  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:09.571103  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:09.686407  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:09.808479  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:10.069402  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:10.177890  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:10.305882  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:10.569382  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:10.683913  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:10.806890  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:11.070448  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:11.178416  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:11.312976  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:11.569548  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:11.680540  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:11.805902  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:12.070121  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:12.178521  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:12.305779  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:12.569437  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:12.681959  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:12.805651  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:13.069703  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:13.178533  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:13.308401  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:13.569711  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:13.681100  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:13.806567  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:14.069769  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:14.179245  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:14.306214  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:14.569491  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:14.687551  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 00:21:14.807124  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:15.070500  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:15.178708  504876 kapi.go:107] duration metric: took 1m12.504663026s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0214 00:21:15.181254  504876 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-956081 cluster.
	I0214 00:21:15.183484  504876 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0214 00:21:15.186197  504876 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0214 00:21:15.305667  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:15.570283  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:15.806260  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:16.068957  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:16.305708  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:16.569985  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:16.806290  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:17.069657  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:17.306124  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:17.569057  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:17.808651  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:18.071725  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:18.312012  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:18.570484  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:18.807879  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:19.074504  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:19.307274  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:19.569785  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:19.807227  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:20.072560  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:20.307350  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:20.573464  504876 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 00:21:20.807849  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:21.073646  504876 kapi.go:107] duration metric: took 1m23.010690338s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0214 00:21:21.310626  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:21.808587  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:22.308520  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:22.824571  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:23.305446  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:23.805688  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:24.305523  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:24.806702  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:25.312271  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:25.806323  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:26.307654  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:26.806161  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:27.307091  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:27.806391  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:28.306011  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:28.805526  504876 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 00:21:29.305617  504876 kapi.go:107] duration metric: took 1m31.005779325s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0214 00:21:29.307903  504876 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, yakd, nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner-rancher, inspektor-gadget, metrics-server, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0214 00:21:29.309845  504876 addons.go:505] enable addons completed in 1m37.718707874s: enabled=[default-storageclass storage-provisioner yakd nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner-rancher inspektor-gadget metrics-server volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0214 00:21:29.309895  504876 start.go:233] waiting for cluster config update ...
	I0214 00:21:29.309916  504876 start.go:242] writing updated cluster config ...
	I0214 00:21:29.310214  504876 ssh_runner.go:195] Run: rm -f paused
	I0214 00:21:29.648233  504876 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0214 00:21:29.656489  504876 out.go:177] * Done! kubectl is now configured to use "addons-956081" cluster and "default" namespace by default
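	
	The gcp-auth messages above (00:21:15) say credentials are mounted into every pod unless the pod configuration carries a label with the `gcp-auth-skip-secret` key. A minimal illustrative sketch of that opt-out, not taken from this run (the pod name, image, and label value are hypothetical; the message specifies only the key):
	
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: example-no-gcp-creds          # hypothetical name, for illustration only
	    labels:
	      gcp-auth-skip-secret: "true"      # presence of this key opts the pod out of the credential mount
	  spec:
	    containers:
	    - name: app
	      image: nginx                      # any image; nothing gcp-auth-specific is required
	
	As the output also notes, pods that already exist are not retrofitted; they must be recreated, or the addon re-enabled with --refresh.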
	
	
	==> CRI-O <==
	Feb 14 00:25:34 addons-956081 crio[864]: time="2024-02-14 00:25:34.961758229Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-wvpcz Namespace:ingress-nginx ID:b6db55a0e1cbbfa5609067e193bf442e1490ce5e0777ffb44e767061f421846e UID:b0e17e53-f2ac-4dd3-972a-387e6bc5d348 NetNS:/var/run/netns/38367921-0ec6-44ab-bc67-30f69d01d722 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 14 00:25:34 addons-956081 crio[864]: time="2024-02-14 00:25:34.961901367Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-wvpcz from CNI network \"kindnet\" (type=ptp)"
	Feb 14 00:25:34 addons-956081 crio[864]: time="2024-02-14 00:25:34.983422837Z" level=info msg="Stopped pod sandbox: b6db55a0e1cbbfa5609067e193bf442e1490ce5e0777ffb44e767061f421846e" id=184b4409-a3e8-4054-8f82-7f31914b78df name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 14 00:25:35 addons-956081 crio[864]: time="2024-02-14 00:25:35.092505445Z" level=info msg="Removing container: 21b2895e3328ca7dc6fba1ca52efc64e3cf2a9905f6d67c374449b2268de9c44" id=e6a48aa1-9573-42b2-a9cb-b5a99e2c6ade name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 14 00:25:35 addons-956081 crio[864]: time="2024-02-14 00:25:35.110308222Z" level=info msg="Removed container 21b2895e3328ca7dc6fba1ca52efc64e3cf2a9905f6d67c374449b2268de9c44: ingress-nginx/ingress-nginx-controller-69cff4fd79-wvpcz/controller" id=e6a48aa1-9573-42b2-a9cb-b5a99e2c6ade name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.268838790Z" level=info msg="Removing container: b040412412bb5b6fa06a9701e4fe2bddb6ac44ec4dc65d080ca79bf4b00c9228" id=a47e00d0-e3f8-4adf-b47c-abe50d1ed191 name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.292832915Z" level=info msg="Removed container b040412412bb5b6fa06a9701e4fe2bddb6ac44ec4dc65d080ca79bf4b00c9228: ingress-nginx/ingress-nginx-admission-patch-qq4dt/patch" id=a47e00d0-e3f8-4adf-b47c-abe50d1ed191 name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.294540489Z" level=info msg="Removing container: 9434b70b4c25dc31a5f96f8a9f526ad421708eb502f5847d66c0de0869eea9e7" id=d5676cc5-dd89-4aef-a282-65019e5afd38 name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.322479317Z" level=info msg="Removed container 9434b70b4c25dc31a5f96f8a9f526ad421708eb502f5847d66c0de0869eea9e7: ingress-nginx/ingress-nginx-admission-create-gmsrj/create" id=d5676cc5-dd89-4aef-a282-65019e5afd38 name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.324350860Z" level=info msg="Stopping pod sandbox: e9c151b229c751a7f854518870e9c489012f8f3e39500b3856990532c1cf3edf" id=3ec49376-8cc2-4e06-9755-dfa3b3a3d710 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.324388481Z" level=info msg="Stopped pod sandbox (already stopped): e9c151b229c751a7f854518870e9c489012f8f3e39500b3856990532c1cf3edf" id=3ec49376-8cc2-4e06-9755-dfa3b3a3d710 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.324952013Z" level=info msg="Removing pod sandbox: e9c151b229c751a7f854518870e9c489012f8f3e39500b3856990532c1cf3edf" id=1e92b07f-fbf3-420e-a7ce-bb91a388d195 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.335173700Z" level=info msg="Removed pod sandbox: e9c151b229c751a7f854518870e9c489012f8f3e39500b3856990532c1cf3edf" id=1e92b07f-fbf3-420e-a7ce-bb91a388d195 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.339559359Z" level=info msg="Stopping pod sandbox: 9cc6e171a264c96e4eee4c54873b166d96ea79bdef13c39f5ac1b307798418f4" id=d2ae4fee-ffc8-4ce0-b550-6299ccfb5768 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.340744442Z" level=info msg="Stopped pod sandbox (already stopped): 9cc6e171a264c96e4eee4c54873b166d96ea79bdef13c39f5ac1b307798418f4" id=d2ae4fee-ffc8-4ce0-b550-6299ccfb5768 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.341297151Z" level=info msg="Removing pod sandbox: 9cc6e171a264c96e4eee4c54873b166d96ea79bdef13c39f5ac1b307798418f4" id=cb41cc09-310d-44ae-9280-9f9c0820dd6c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.350181026Z" level=info msg="Removed pod sandbox: 9cc6e171a264c96e4eee4c54873b166d96ea79bdef13c39f5ac1b307798418f4" id=cb41cc09-310d-44ae-9280-9f9c0820dd6c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.350824435Z" level=info msg="Stopping pod sandbox: b6db55a0e1cbbfa5609067e193bf442e1490ce5e0777ffb44e767061f421846e" id=4986a836-aa4c-4455-b6aa-fc32cf34e607 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.350865411Z" level=info msg="Stopped pod sandbox (already stopped): b6db55a0e1cbbfa5609067e193bf442e1490ce5e0777ffb44e767061f421846e" id=4986a836-aa4c-4455-b6aa-fc32cf34e607 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.351340009Z" level=info msg="Removing pod sandbox: b6db55a0e1cbbfa5609067e193bf442e1490ce5e0777ffb44e767061f421846e" id=11a16316-71ae-4f2b-9679-55aa2d133ed9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.368907809Z" level=info msg="Removed pod sandbox: b6db55a0e1cbbfa5609067e193bf442e1490ce5e0777ffb44e767061f421846e" id=11a16316-71ae-4f2b-9679-55aa2d133ed9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.369447941Z" level=info msg="Stopping pod sandbox: 907a028ba90dfab6a070488f639adb83c4a13f2badd0266febb4b6ca54f1f2e2" id=6be1ef31-b009-4dd4-a70f-d76c460f6df8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.369481409Z" level=info msg="Stopped pod sandbox (already stopped): 907a028ba90dfab6a070488f639adb83c4a13f2badd0266febb4b6ca54f1f2e2" id=6be1ef31-b009-4dd4-a70f-d76c460f6df8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.369986005Z" level=info msg="Removing pod sandbox: 907a028ba90dfab6a070488f639adb83c4a13f2badd0266febb4b6ca54f1f2e2" id=cfbac7a9-eceb-44b5-9e47-0613dff20fa9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 14 00:25:39 addons-956081 crio[864]: time="2024-02-14 00:25:39.378588446Z" level=info msg="Removed pod sandbox: 907a028ba90dfab6a070488f639adb83c4a13f2badd0266febb4b6ca54f1f2e2" id=cfbac7a9-eceb-44b5-9e47-0613dff20fa9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	118b90ea3eff3       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                               8 seconds ago       Exited              hello-world-app           2                   e0795ce2d1421       hello-world-app-5d77478584-v6wg7
	19a12e584ae8f       docker.io/library/nginx@sha256:4fb7e44d1af9cdfbd38c4e951e84d528662fa083fd74f03f13cd797dc7c39bee                2 minutes ago       Running             nginx                     0                   beede09ed6d17       nginx
	9f92d96bac69d       ghcr.io/headlamp-k8s/headlamp@sha256:0fe50c48c186b89ff3d341dba427174d8232a64c3062af5de854a3a7cb2105ce          4 minutes ago       Running             headlamp                  0                   6ff8eba44fed9       headlamp-7ddfbb94ff-dlnlc
	fad5320ebdc56       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa   4 minutes ago       Running             gcp-auth                  0                   b09e34e3294f8       gcp-auth-d4c87556c-nhdpd
	b2651f32fff36       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                5 minutes ago       Running             yakd                      0                   4a7f1a435a45b       yakd-dashboard-9947fc6bf-fshgv
	da661fcb9af89       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                               5 minutes ago       Running             coredns                   0                   295138bb16c18       coredns-5dd5756b68-b4tsx
	ba0fc3e332d65       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                               5 minutes ago       Running             storage-provisioner       0                   00833fb39e4bf       storage-provisioner
	554889e86d1ba       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                               5 minutes ago       Running             kube-proxy                0                   3ad78406bec40       kube-proxy-tsn84
	f56ff28ec2899       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                               5 minutes ago       Running             kindnet-cni               0                   7249f012dc743       kindnet-zqhrf
	2d0a936ab1c48       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                               6 minutes ago       Running             kube-apiserver            0                   48348887b00fc       kube-apiserver-addons-956081
	e549885d6c5b8       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                               6 minutes ago       Running             kube-controller-manager   0                   8ab937137ca74       kube-controller-manager-addons-956081
	dcf2ec510729a       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                               6 minutes ago       Running             etcd                      0                   f66befdb96871       etcd-addons-956081
	73fc54ee355c1       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                               6 minutes ago       Running             kube-scheduler            0                   70ed39ccee73b       kube-scheduler-addons-956081
	
	
	==> coredns [da661fcb9af8954082ce6bc04ab591eed5e55544e752ebeaf4ac966a422efb48] <==
	[INFO] 10.244.0.20:58371 - 27551 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068857s
	[INFO] 10.244.0.20:45781 - 7229 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002230745s
	[INFO] 10.244.0.20:58371 - 57178 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001169091s
	[INFO] 10.244.0.20:58371 - 15616 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003970334s
	[INFO] 10.244.0.20:45781 - 26458 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004126288s
	[INFO] 10.244.0.20:45781 - 32027 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000138944s
	[INFO] 10.244.0.20:58371 - 44563 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050412s
	[INFO] 10.244.0.20:50346 - 16096 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000111835s
	[INFO] 10.244.0.20:39326 - 45062 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000071598s
	[INFO] 10.244.0.20:50346 - 26569 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067101s
	[INFO] 10.244.0.20:50346 - 25667 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059831s
	[INFO] 10.244.0.20:50346 - 9394 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061127s
	[INFO] 10.244.0.20:50346 - 40556 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000105066s
	[INFO] 10.244.0.20:50346 - 62977 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060947s
	[INFO] 10.244.0.20:39326 - 50368 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000327554s
	[INFO] 10.244.0.20:50346 - 58027 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001444584s
	[INFO] 10.244.0.20:39326 - 60531 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000145574s
	[INFO] 10.244.0.20:39326 - 4951 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000089123s
	[INFO] 10.244.0.20:39326 - 60759 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045152s
	[INFO] 10.244.0.20:39326 - 63679 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038728s
	[INFO] 10.244.0.20:39326 - 52746 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001020251s
	[INFO] 10.244.0.20:50346 - 4316 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001360647s
	[INFO] 10.244.0.20:39326 - 64149 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001065814s
	[INFO] 10.244.0.20:50346 - 29471 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000266845s
	[INFO] 10.244.0.20:39326 - 55329 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.0000416s
	
	
	==> describe nodes <==
	Name:               addons-956081
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-956081
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90664111bc55fed26ce3e984eae935c06b114802
	                    minikube.k8s.io/name=addons-956081
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_14T00_19_39_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-956081
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Feb 2024 00:19:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-956081
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Feb 2024 00:25:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Feb 2024 00:23:13 +0000   Wed, 14 Feb 2024 00:19:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Feb 2024 00:23:13 +0000   Wed, 14 Feb 2024 00:19:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Feb 2024 00:23:13 +0000   Wed, 14 Feb 2024 00:19:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Feb 2024 00:23:13 +0000   Wed, 14 Feb 2024 00:20:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-956081
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2ad636635f24dc9b6da92bbe03aa6ec
	  System UUID:                8d8c5689-32e9-4756-bac9-01402a57fdf9
	  Boot ID:                    abc429c2-787e-4b53-ac30-814ea59b0a0f
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-v6wg7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  gcp-auth                    gcp-auth-d4c87556c-nhdpd                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  headlamp                    headlamp-7ddfbb94ff-dlnlc                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 coredns-5dd5756b68-b4tsx                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m48s
	  kube-system                 etcd-addons-956081                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m1s
	  kube-system                 kindnet-zqhrf                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m49s
	  kube-system                 kube-apiserver-addons-956081             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-controller-manager-addons-956081    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-proxy-tsn84                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-scheduler-addons-956081             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-fshgv           0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m42s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m9s (x8 over 6m9s)  kubelet          Node addons-956081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s (x8 over 6m9s)  kubelet          Node addons-956081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m9s (x8 over 6m9s)  kubelet          Node addons-956081 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m2s                 kubelet          Node addons-956081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m2s                 kubelet          Node addons-956081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m2s                 kubelet          Node addons-956081 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m50s                node-controller  Node addons-956081 event: Registered Node addons-956081 in Controller
	  Normal  NodeReady                5m15s                kubelet          Node addons-956081 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001116] FS-Cache: O-key=[8] '9b3a5c0100000000'
	[  +0.000858] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=00000000195ec576{9p.inode} n=00000000ecd8046a
	[  +0.001059] FS-Cache: N-key=[8] '9b3a5c0100000000'
	[  +0.006948] FS-Cache: Duplicate cookie detected
	[  +0.000737] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000976] FS-Cache: O-cookie d=00000000195ec576{9p.inode} n=00000000fc49a1a1
	[  +0.001218] FS-Cache: O-key=[8] '9b3a5c0100000000'
	[  +0.000729] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000965] FS-Cache: N-cookie d=00000000195ec576{9p.inode} n=000000003191040e
	[  +0.001120] FS-Cache: N-key=[8] '9b3a5c0100000000'
	[  +2.747217] FS-Cache: Duplicate cookie detected
	[  +0.000832] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001120] FS-Cache: O-cookie d=00000000195ec576{9p.inode} n=00000000f9804413
	[  +0.001151] FS-Cache: O-key=[8] '9a3a5c0100000000'
	[  +0.000805] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001094] FS-Cache: N-cookie d=00000000195ec576{9p.inode} n=00000000213fcdb2
	[  +0.001196] FS-Cache: N-key=[8] '9a3a5c0100000000'
	[  +0.363664] FS-Cache: Duplicate cookie detected
	[  +0.000869] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=00000000195ec576{9p.inode} n=000000000e1f0c8d
	[  +0.001110] FS-Cache: O-key=[8] 'a03a5c0100000000'
	[  +0.000814] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001000] FS-Cache: N-cookie d=00000000195ec576{9p.inode} n=00000000ecd8046a
	[  +0.001130] FS-Cache: N-key=[8] 'a03a5c0100000000'
	
	
	==> etcd [dcf2ec510729a35b29bed0f00fb00c9027843b5e9818d9eee641c2e6bfceb393] <==
	{"level":"info","ts":"2024-02-14T00:19:33.162053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-14T00:19:33.162112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-02-14T00:19:33.16216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-14T00:19:33.165861Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T00:19:33.166795Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-956081 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-14T00:19:33.166875Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T00:19:33.167938Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-14T00:19:33.17421Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T00:19:33.174813Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T00:19:33.174897Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T00:19:33.185762Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T00:19:33.186729Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-14T00:19:33.188241Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-14T00:19:33.188271Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-14T00:19:51.82998Z","caller":"traceutil/trace.go:171","msg":"trace[264292535] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"103.492534ms","start":"2024-02-14T00:19:51.726464Z","end":"2024-02-14T00:19:51.829956Z","steps":["trace[264292535] 'process raft request'  (duration: 103.366906ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-14T00:19:56.171235Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.644639ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-14T00:19:56.192461Z","caller":"traceutil/trace.go:171","msg":"trace[1025710820] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:414; }","duration":"122.006823ms","start":"2024-02-14T00:19:56.070436Z","end":"2024-02-14T00:19:56.192442Z","steps":["trace[1025710820] 'agreement among raft nodes before linearized reading'  (duration: 100.362794ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-14T00:19:56.171828Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.402672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-02-14T00:19:56.212617Z","caller":"traceutil/trace.go:171","msg":"trace[379271931] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:414; }","duration":"142.182507ms","start":"2024-02-14T00:19:56.070413Z","end":"2024-02-14T00:19:56.212595Z","steps":["trace[379271931] 'agreement among raft nodes before linearized reading'  (duration: 101.130403ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-14T00:19:56.172292Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.88591ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-14T00:19:56.212856Z","caller":"traceutil/trace.go:171","msg":"trace[1672803240] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:0; response_revision:414; }","duration":"142.445037ms","start":"2024-02-14T00:19:56.070397Z","end":"2024-02-14T00:19:56.212842Z","steps":["trace[1672803240] 'agreement among raft nodes before linearized reading'  (duration: 101.557854ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-14T00:19:56.173255Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.871247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replication-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-02-14T00:19:56.213054Z","caller":"traceutil/trace.go:171","msg":"trace[1960289570] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replication-controller; range_end:; response_count:1; response_revision:414; }","duration":"142.667082ms","start":"2024-02-14T00:19:56.070373Z","end":"2024-02-14T00:19:56.21304Z","steps":["trace[1960289570] 'agreement among raft nodes before linearized reading'  (duration: 101.941433ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-14T00:19:56.175455Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.109795ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-14T00:19:56.213217Z","caller":"traceutil/trace.go:171","msg":"trace[410306162] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:0; response_revision:414; }","duration":"142.874834ms","start":"2024-02-14T00:19:56.070335Z","end":"2024-02-14T00:19:56.213209Z","steps":["trace[410306162] 'agreement among raft nodes before linearized reading'  (duration: 102.946241ms)"],"step_count":1}
	
	
	==> gcp-auth [fad5320ebdc5622af8625953a5bf5dbd37a6ae3574974c170ee30c64c6b7abe1] <==
	2024/02/14 00:21:13 GCP Auth Webhook started!
	2024/02/14 00:21:31 Ready to marshal response ...
	2024/02/14 00:21:31 Ready to write response ...
	2024/02/14 00:21:31 Ready to marshal response ...
	2024/02/14 00:21:31 Ready to write response ...
	2024/02/14 00:21:31 Ready to marshal response ...
	2024/02/14 00:21:31 Ready to write response ...
	2024/02/14 00:21:41 Ready to marshal response ...
	2024/02/14 00:21:41 Ready to write response ...
	2024/02/14 00:21:49 Ready to marshal response ...
	2024/02/14 00:21:49 Ready to write response ...
	2024/02/14 00:21:49 Ready to marshal response ...
	2024/02/14 00:21:49 Ready to write response ...
	2024/02/14 00:21:57 Ready to marshal response ...
	2024/02/14 00:21:57 Ready to write response ...
	2024/02/14 00:22:05 Ready to marshal response ...
	2024/02/14 00:22:05 Ready to write response ...
	2024/02/14 00:22:36 Ready to marshal response ...
	2024/02/14 00:22:36 Ready to write response ...
	2024/02/14 00:22:53 Ready to marshal response ...
	2024/02/14 00:22:53 Ready to write response ...
	2024/02/14 00:25:14 Ready to marshal response ...
	2024/02/14 00:25:14 Ready to write response ...
	
	
	==> kernel <==
	 00:25:40 up  3:08,  0 users,  load average: 0.60, 1.31, 2.01
	Linux addons-956081 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [f56ff28ec289907ae8e58753c152aa2b4c7f3a9fd1f4c18dfbf8162283385035] <==
	I0214 00:23:35.042155       1 main.go:227] handling current node
	I0214 00:23:45.053321       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:23:45.053354       1 main.go:227] handling current node
	I0214 00:23:55.065040       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:23:55.065071       1 main.go:227] handling current node
	I0214 00:24:05.077235       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:24:05.077271       1 main.go:227] handling current node
	I0214 00:24:15.087906       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:24:15.087955       1 main.go:227] handling current node
	I0214 00:24:25.092313       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:24:25.092459       1 main.go:227] handling current node
	I0214 00:24:35.096919       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:24:35.096949       1 main.go:227] handling current node
	I0214 00:24:45.106257       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:24:45.106292       1 main.go:227] handling current node
	I0214 00:24:55.119156       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:24:55.119184       1 main.go:227] handling current node
	I0214 00:25:05.128682       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:25:05.128712       1 main.go:227] handling current node
	I0214 00:25:15.141700       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:25:15.141752       1 main.go:227] handling current node
	I0214 00:25:25.145602       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:25:25.145637       1 main.go:227] handling current node
	I0214 00:25:35.157933       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:25:35.158086       1 main.go:227] handling current node
	
	
	==> kube-apiserver [2d0a936ab1c4862c00d4ea0c1bd400c429e05324d1d8523472662c04d01b4686] <==
	I0214 00:22:52.536564       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 00:22:53.076814       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0214 00:22:53.410681       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.124.51"}
	W0214 00:22:53.520528       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0214 00:22:53.537771       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0214 00:22:53.557803       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0214 00:22:55.775494       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E0214 00:23:05.775946       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0214 00:23:15.776548       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0214 00:23:25.777382       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","workload-high","workload-low","global-default","catch-all","exempt","system","node-high"] items=[{},{},{},{},{},{},{},{}]
	I0214 00:23:31.524046       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0214 00:23:35.778622       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","global-default","catch-all","exempt","system","node-high","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0214 00:23:45.778983       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E0214 00:23:55.779913       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E0214 00:24:05.780828       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","workload-high","workload-low","global-default","catch-all","exempt","system","node-high"] items=[{},{},{},{},{},{},{},{}]
	E0214 00:24:15.781876       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E0214 00:24:25.782817       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","workload-high","workload-low","global-default","catch-all","exempt","system","node-high"] items=[{},{},{},{},{},{},{},{}]
	E0214 00:24:35.783664       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","system","node-high","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E0214 00:24:45.784129       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E0214 00:24:55.784966       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","system","node-high","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E0214 00:25:05.785943       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	I0214 00:25:14.340045       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.65.150"}
	E0214 00:25:15.787173       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0214 00:25:25.788129       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E0214 00:25:35.789416       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [e549885d6c5b8c0f83a1f3862b50e670ca4b71beac14469352372897bf08c848] <==
	W0214 00:24:08.390628       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 00:24:08.390663       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0214 00:24:40.829623       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 00:24:40.829657       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0214 00:24:50.600666       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 00:24:50.600701       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0214 00:24:52.144992       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 00:24:52.145028       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0214 00:25:05.033655       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 00:25:05.033690       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0214 00:25:14.103371       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0214 00:25:14.141604       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-v6wg7"
	I0214 00:25:14.177665       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="74.852435ms"
	I0214 00:25:14.207825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="29.996149ms"
	I0214 00:25:14.226962       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="18.995301ms"
	I0214 00:25:14.227144       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="52.677µs"
	I0214 00:25:17.061527       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="55.121µs"
	I0214 00:25:18.068089       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="71.121µs"
	I0214 00:25:19.061476       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="79.909µs"
	W0214 00:25:22.804019       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 00:25:22.804053       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0214 00:25:31.761076       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0214 00:25:31.768908       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0214 00:25:31.772908       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="5.341µs"
	I0214 00:25:32.101091       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="51.511µs"
	
	
	==> kube-proxy [554889e86d1ba5d60a9e1d0e384fbddac9aad99f99db50350e0abb325bd3ed05] <==
	I0214 00:19:56.877295       1 server_others.go:69] "Using iptables proxy"
	I0214 00:19:57.115557       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0214 00:19:57.556249       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0214 00:19:57.574516       1 server_others.go:152] "Using iptables Proxier"
	I0214 00:19:57.574555       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0214 00:19:57.574563       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0214 00:19:57.574594       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0214 00:19:57.574783       1 server.go:846] "Version info" version="v1.28.4"
	I0214 00:19:57.574800       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 00:19:57.577249       1 config.go:188] "Starting service config controller"
	I0214 00:19:57.578972       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0214 00:19:57.579062       1 config.go:97] "Starting endpoint slice config controller"
	I0214 00:19:57.579108       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0214 00:19:57.579603       1 config.go:315] "Starting node config controller"
	I0214 00:19:57.579668       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0214 00:19:57.680654       1 shared_informer.go:318] Caches are synced for node config
	I0214 00:19:57.686563       1 shared_informer.go:318] Caches are synced for service config
	I0214 00:19:57.695833       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [73fc54ee355c14551f6d05a232bbb6d19788adebb026f608aa355506e10365ba] <==
	E0214 00:19:35.770358       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0214 00:19:35.770392       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0214 00:19:35.770406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0214 00:19:35.770452       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0214 00:19:35.770485       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0214 00:19:36.583152       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0214 00:19:36.583279       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0214 00:19:36.584699       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0214 00:19:36.584784       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0214 00:19:36.672435       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0214 00:19:36.672475       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0214 00:19:36.689931       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0214 00:19:36.689967       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0214 00:19:36.722464       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 00:19:36.722501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0214 00:19:36.750673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0214 00:19:36.750782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0214 00:19:36.807487       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0214 00:19:36.807632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0214 00:19:36.813320       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0214 00:19:36.813436       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0214 00:19:39.843272       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 00:19:52.641956       1 trace.go:236] Trace[1775574780]: "Scheduling" namespace:kube-system,name:coredns-5dd5756b68-zhkvh (14-Feb-2024 00:19:52.473) (total time: 168ms):
	Trace[1775574780]: ---"Computing predicates done" 168ms (00:19:52.641)
	Trace[1775574780]: [168.793201ms] [168.793201ms] END
	
	
	==> kubelet <==
	Feb 14 00:25:35 addons-956081 kubelet[1332]: I0214 00:25:35.179781    1332 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0e17e53-f2ac-4dd3-972a-387e6bc5d348-webhook-cert\") pod \"b0e17e53-f2ac-4dd3-972a-387e6bc5d348\" (UID: \"b0e17e53-f2ac-4dd3-972a-387e6bc5d348\") "
	Feb 14 00:25:35 addons-956081 kubelet[1332]: I0214 00:25:35.179857    1332 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqrpr\" (UniqueName: \"kubernetes.io/projected/b0e17e53-f2ac-4dd3-972a-387e6bc5d348-kube-api-access-kqrpr\") pod \"b0e17e53-f2ac-4dd3-972a-387e6bc5d348\" (UID: \"b0e17e53-f2ac-4dd3-972a-387e6bc5d348\") "
	Feb 14 00:25:35 addons-956081 kubelet[1332]: I0214 00:25:35.182355    1332 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0e17e53-f2ac-4dd3-972a-387e6bc5d348-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b0e17e53-f2ac-4dd3-972a-387e6bc5d348" (UID: "b0e17e53-f2ac-4dd3-972a-387e6bc5d348"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 14 00:25:35 addons-956081 kubelet[1332]: I0214 00:25:35.183224    1332 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0e17e53-f2ac-4dd3-972a-387e6bc5d348-kube-api-access-kqrpr" (OuterVolumeSpecName: "kube-api-access-kqrpr") pod "b0e17e53-f2ac-4dd3-972a-387e6bc5d348" (UID: "b0e17e53-f2ac-4dd3-972a-387e6bc5d348"). InnerVolumeSpecName "kube-api-access-kqrpr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 14 00:25:35 addons-956081 kubelet[1332]: I0214 00:25:35.280395    1332 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0e17e53-f2ac-4dd3-972a-387e6bc5d348-webhook-cert\") on node \"addons-956081\" DevicePath \"\""
	Feb 14 00:25:35 addons-956081 kubelet[1332]: I0214 00:25:35.280447    1332 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kqrpr\" (UniqueName: \"kubernetes.io/projected/b0e17e53-f2ac-4dd3-972a-387e6bc5d348-kube-api-access-kqrpr\") on node \"addons-956081\" DevicePath \"\""
	Feb 14 00:25:36 addons-956081 kubelet[1332]: I0214 00:25:36.825952    1332 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b0e17e53-f2ac-4dd3-972a-387e6bc5d348" path="/var/lib/kubelet/pods/b0e17e53-f2ac-4dd3-972a-387e6bc5d348/volumes"
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.051033    1332 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2905aa9bf23e9325055def2f7a2113ed70737547b3de3d56eef675475d8e93d2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2905aa9bf23e9325055def2f7a2113ed70737547b3de3d56eef675475d8e93d2/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-ingress-dns-minikube_30aa225e-c9bd-41fb-ba09-387e752a9783/minikube-ingress-dns/5.log" to get inode usage: stat /var/log/pods/kube-system_kube-ingress-dns-minikube_30aa225e-c9bd-41fb-ba09-387e752a9783/minikube-ingress-dns/5.log: no such file or directory
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.056949    1332 manager.go:1106] Failed to create existing container: /crio-a2590a409661484b6524908868e7a06d3d4237bcb5373b735f001529bb1407ec: Error finding container a2590a409661484b6524908868e7a06d3d4237bcb5373b735f001529bb1407ec: Status 404 returned error can't find the container with id a2590a409661484b6524908868e7a06d3d4237bcb5373b735f001529bb1407ec
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.057351    1332 manager.go:1106] Failed to create existing container: /crio-c4d78793c1038726bc0df5f6992cb0ec34f93b7af78eded29efd8ee779f3d5ac: Error finding container c4d78793c1038726bc0df5f6992cb0ec34f93b7af78eded29efd8ee779f3d5ac: Status 404 returned error can't find the container with id c4d78793c1038726bc0df5f6992cb0ec34f93b7af78eded29efd8ee779f3d5ac
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.058589    1332 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/653dd02d4b75974a842d90a1778b2e9624ee23b898e74b22b93c31ff0e00255f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/653dd02d4b75974a842d90a1778b2e9624ee23b898e74b22b93c31ff0e00255f/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/ingress-nginx_ingress-nginx-controller-69cff4fd79-wvpcz_b0e17e53-f2ac-4dd3-972a-387e6bc5d348/controller/0.log" to get inode usage: stat /var/log/pods/ingress-nginx_ingress-nginx-controller-69cff4fd79-wvpcz_b0e17e53-f2ac-4dd3-972a-387e6bc5d348/controller/0.log: no such file or directory
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.062848    1332 manager.go:1106] Failed to create existing container: /docker/3d3aad31f159bc205228e7ad4b5d677873b031939a0e0b8e43f888db9e6b8036/crio-c4d78793c1038726bc0df5f6992cb0ec34f93b7af78eded29efd8ee779f3d5ac: Error finding container c4d78793c1038726bc0df5f6992cb0ec34f93b7af78eded29efd8ee779f3d5ac: Status 404 returned error can't find the container with id c4d78793c1038726bc0df5f6992cb0ec34f93b7af78eded29efd8ee779f3d5ac
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.067382    1332 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c2ae62418c3608a2ab94b04ef9cf1c01ebe59e10baf8b6c44b15b26758e73a54/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c2ae62418c3608a2ab94b04ef9cf1c01ebe59e10baf8b6c44b15b26758e73a54/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.068519    1332 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5de3359c6e0b69c0af39607d11fc064724847ace269c58e7acb07edc203ccc0a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5de3359c6e0b69c0af39607d11fc064724847ace269c58e7acb07edc203ccc0a/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.070755    1332 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4bc8b9df048812e608269c2bd0c554aa438c1c908493c8dcb504333f638531db/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4bc8b9df048812e608269c2bd0c554aa438c1c908493c8dcb504333f638531db/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.073104    1332 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ed2ce98e49a7d160f3578ad74297883f9b34b3317d3e5548314d48250cf2699e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ed2ce98e49a7d160f3578ad74297883f9b34b3317d3e5548314d48250cf2699e/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.080683    1332 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c2ae62418c3608a2ab94b04ef9cf1c01ebe59e10baf8b6c44b15b26758e73a54/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c2ae62418c3608a2ab94b04ef9cf1c01ebe59e10baf8b6c44b15b26758e73a54/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.083933    1332 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9eb9959246ca1ebf4d4c220751b953411a8c6b23a92b38fe3e705a0e2c1dde1b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9eb9959246ca1ebf4d4c220751b953411a8c6b23a92b38fe3e705a0e2c1dde1b/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.086207    1332 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9eb9959246ca1ebf4d4c220751b953411a8c6b23a92b38fe3e705a0e2c1dde1b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9eb9959246ca1ebf4d4c220751b953411a8c6b23a92b38fe3e705a0e2c1dde1b/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.088469    1332 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/df4767d72ff715fb49ce0933e76886bea15e8a46d91af042191196b950d7a4fc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/df4767d72ff715fb49ce0933e76886bea15e8a46d91af042191196b950d7a4fc/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.088506    1332 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ed2ce98e49a7d160f3578ad74297883f9b34b3317d3e5548314d48250cf2699e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ed2ce98e49a7d160f3578ad74297883f9b34b3317d3e5548314d48250cf2699e/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.094200    1332 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/658d47be827424a5477de0c5f81f6e8f68b234c1cd1b20c5ada954bb513347d3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/658d47be827424a5477de0c5f81f6e8f68b234c1cd1b20c5ada954bb513347d3/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 00:25:39 addons-956081 kubelet[1332]: E0214 00:25:39.136703    1332 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e626704188bff1957466071bd14fe4afb3fe8843489c183a07309f46520deaf0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e626704188bff1957466071bd14fe4afb3fe8843489c183a07309f46520deaf0/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 00:25:39 addons-956081 kubelet[1332]: I0214 00:25:39.267614    1332 scope.go:117] "RemoveContainer" containerID="b040412412bb5b6fa06a9701e4fe2bddb6ac44ec4dc65d080ca79bf4b00c9228"
	Feb 14 00:25:39 addons-956081 kubelet[1332]: I0214 00:25:39.293270    1332 scope.go:117] "RemoveContainer" containerID="9434b70b4c25dc31a5f96f8a9f526ad421708eb502f5847d66c0de0869eea9e7"
	
	
	==> storage-provisioner [ba0fc3e332d65834ba9866023818422304e437ace032427739d2e1fb5c580a76] <==
	I0214 00:20:26.271290       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 00:20:26.358581       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 00:20:26.358660       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 00:20:26.370410       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 00:20:26.372483       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-956081_8b689401-bc54-4b6a-9715-7b012b042467!
	I0214 00:20:26.375581       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"03a13095-75ac-4e66-a0de-695da5f8f936", APIVersion:"v1", ResourceVersion:"913", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-956081_8b689401-bc54-4b6a-9715-7b012b042467 became leader
	I0214 00:20:26.472869       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-956081_8b689401-bc54-4b6a-9715-7b012b042467!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-956081 -n addons-956081
helpers_test.go:261: (dbg) Run:  kubectl --context addons-956081 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.61s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (181.2s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-592927 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-592927 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.129513466s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-592927 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-592927 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f83e806f-9c5e-41c2-8526-4f94709e31fd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f83e806f-9c5e-41c2-8526-4f94709e31fd] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.003867572s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-592927 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0214 00:34:46.235127  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:34:46.240453  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:34:46.250754  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:34:46.271057  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:34:46.311301  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:34:46.391574  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:34:46.551955  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:34:46.872470  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:34:47.513547  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:34:48.794059  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:34:51.354580  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:34:56.475093  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:35:06.715628  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-592927 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.227429019s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-592927 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-592927 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.018663243s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-592927 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-592927 addons disable ingress-dns --alsologtostderr -v=1: (1.99819248s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-592927 addons disable ingress --alsologtostderr -v=1
E0214 00:35:27.195838  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-592927 addons disable ingress --alsologtostderr -v=1: (7.500266796s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-592927
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-592927:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7be116fae23c83d75956cd937b4e9e872f44bfc7c1bcbdb51a7607b9b9695c56",
	        "Created": "2024-02-14T00:31:11.400466717Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 531912,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T00:31:11.717084857Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/7be116fae23c83d75956cd937b4e9e872f44bfc7c1bcbdb51a7607b9b9695c56/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7be116fae23c83d75956cd937b4e9e872f44bfc7c1bcbdb51a7607b9b9695c56/hostname",
	        "HostsPath": "/var/lib/docker/containers/7be116fae23c83d75956cd937b4e9e872f44bfc7c1bcbdb51a7607b9b9695c56/hosts",
	        "LogPath": "/var/lib/docker/containers/7be116fae23c83d75956cd937b4e9e872f44bfc7c1bcbdb51a7607b9b9695c56/7be116fae23c83d75956cd937b4e9e872f44bfc7c1bcbdb51a7607b9b9695c56-json.log",
	        "Name": "/ingress-addon-legacy-592927",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-592927:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-592927",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9dc55c16ab41c5f1acc72207f400f8276b3c126a96a52fe7ec3b8a2d86100d9d-init/diff:/var/lib/docker/overlay2/6bce6236d7ba68734b2ab000b848b0bb40e1e541964b0b25c50d016c8f0ef97c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9dc55c16ab41c5f1acc72207f400f8276b3c126a96a52fe7ec3b8a2d86100d9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9dc55c16ab41c5f1acc72207f400f8276b3c126a96a52fe7ec3b8a2d86100d9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9dc55c16ab41c5f1acc72207f400f8276b3c126a96a52fe7ec3b8a2d86100d9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-592927",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-592927/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-592927",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-592927",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-592927",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "00e6520f724d319a82a97a9b257f3bf6eb5f64ed9d7e7fcecc9d15af5cf37aa0",
	            "SandboxKey": "/var/run/docker/netns/00e6520f724d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-592927": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7be116fae23c",
	                        "ingress-addon-legacy-592927"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "d614a634fcb5746078fd9f9c70133adfb08ea84e4b4a991949f690dbfea62eab",
	                    "EndpointID": "8cd8f1fd838a2c97ffe4ade0c4451258cdeb64ea06e5c787a6db2c1ff8f956cf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-592927",
	                        "7be116fae23c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-592927 -n ingress-addon-legacy-592927
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-592927 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-592927 logs -n 25: (1.307774756s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-526497                                                   | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2481303429/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-526497                                                   | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2481303429/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-526497 ssh findmnt                                          | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-526497 ssh findmnt                                          | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC | 14 Feb 24 00:30 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-526497 ssh findmnt                                          | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC | 14 Feb 24 00:30 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-526497 ssh findmnt                                          | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC | 14 Feb 24 00:30 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-526497                                                   | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-526497                                                      | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC | 14 Feb 24 00:30 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-526497                                                      | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC | 14 Feb 24 00:30 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-526497                                                      | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC | 14 Feb 24 00:30 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-526497                                                      | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC | 14 Feb 24 00:30 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-526497                                                      | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC | 14 Feb 24 00:30 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-526497 ssh pgrep                                            | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-526497                                                      | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC | 14 Feb 24 00:30 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-526497                                                      | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC | 14 Feb 24 00:30 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-526497 image build -t                                       | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC | 14 Feb 24 00:30 UTC |
	|                | localhost/my-image:functional-526497                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-526497 image ls                                             | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC | 14 Feb 24 00:30 UTC |
	| delete         | -p functional-526497                                                   | functional-526497           | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC | 14 Feb 24 00:30 UTC |
	| start          | -p ingress-addon-legacy-592927                                         | ingress-addon-legacy-592927 | jenkins | v1.32.0 | 14 Feb 24 00:30 UTC | 14 Feb 24 00:32 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-592927                                            | ingress-addon-legacy-592927 | jenkins | v1.32.0 | 14 Feb 24 00:32 UTC | 14 Feb 24 00:32 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-592927                                            | ingress-addon-legacy-592927 | jenkins | v1.32.0 | 14 Feb 24 00:32 UTC | 14 Feb 24 00:32 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-592927                                            | ingress-addon-legacy-592927 | jenkins | v1.32.0 | 14 Feb 24 00:32 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-592927 ip                                         | ingress-addon-legacy-592927 | jenkins | v1.32.0 | 14 Feb 24 00:35 UTC | 14 Feb 24 00:35 UTC |
	| addons         | ingress-addon-legacy-592927                                            | ingress-addon-legacy-592927 | jenkins | v1.32.0 | 14 Feb 24 00:35 UTC | 14 Feb 24 00:35 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-592927                                            | ingress-addon-legacy-592927 | jenkins | v1.32.0 | 14 Feb 24 00:35 UTC | 14 Feb 24 00:35 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 00:30:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 00:30:42.584132  531457 out.go:291] Setting OutFile to fd 1 ...
	I0214 00:30:42.584262  531457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:30:42.584272  531457 out.go:304] Setting ErrFile to fd 2...
	I0214 00:30:42.584278  531457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:30:42.584522  531457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
	I0214 00:30:42.584955  531457 out.go:298] Setting JSON to false
	I0214 00:30:42.585831  531457 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11586,"bootTime":1707859057,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0214 00:30:42.585906  531457 start.go:138] virtualization:  
	I0214 00:30:42.588784  531457 out.go:177] * [ingress-addon-legacy-592927] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 00:30:42.591333  531457 out.go:177]   - MINIKUBE_LOCATION=18169
	I0214 00:30:42.591499  531457 notify.go:220] Checking for updates...
	I0214 00:30:42.595868  531457 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 00:30:42.598336  531457 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 00:30:42.600336  531457 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	I0214 00:30:42.602332  531457 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 00:30:42.604198  531457 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 00:30:42.605999  531457 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 00:30:42.626788  531457 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 00:30:42.626922  531457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 00:30:42.690469  531457 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:47 SystemTime:2024-02-14 00:30:42.680860515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 00:30:42.690583  531457 docker.go:295] overlay module found
	I0214 00:30:42.694358  531457 out.go:177] * Using the docker driver based on user configuration
	I0214 00:30:42.696532  531457 start.go:298] selected driver: docker
	I0214 00:30:42.696552  531457 start.go:902] validating driver "docker" against <nil>
	I0214 00:30:42.696567  531457 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 00:30:42.697210  531457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 00:30:42.756279  531457 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:47 SystemTime:2024-02-14 00:30:42.747568071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 00:30:42.756436  531457 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 00:30:42.756684  531457 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 00:30:42.758758  531457 out.go:177] * Using Docker driver with root privileges
	I0214 00:30:42.760655  531457 cni.go:84] Creating CNI manager for ""
	I0214 00:30:42.760681  531457 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 00:30:42.760695  531457 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 00:30:42.760721  531457 start_flags.go:321] config:
	{Name:ingress-addon-legacy-592927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-592927 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 00:30:42.763036  531457 out.go:177] * Starting control plane node ingress-addon-legacy-592927 in cluster ingress-addon-legacy-592927
	I0214 00:30:42.765168  531457 cache.go:121] Beginning downloading kic base image for docker with crio
	I0214 00:30:42.767228  531457 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0214 00:30:42.769150  531457 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0214 00:30:42.769231  531457 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 00:30:42.784265  531457 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0214 00:30:42.784289  531457 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0214 00:30:42.829839  531457 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0214 00:30:42.829865  531457 cache.go:56] Caching tarball of preloaded images
	I0214 00:30:42.830043  531457 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0214 00:30:42.832404  531457 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0214 00:30:42.834333  531457 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0214 00:30:42.967236  531457 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0214 00:31:03.423995  531457 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0214 00:31:03.424103  531457 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0214 00:31:04.613214  531457 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0214 00:31:04.613585  531457 profile.go:148] Saving config to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/config.json ...
	I0214 00:31:04.613622  531457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/config.json: {Name:mk3b45dcde6845ebda76aa8d43aace53493e15b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:31:04.613856  531457 cache.go:194] Successfully downloaded all kic artifacts
	I0214 00:31:04.613894  531457 start.go:365] acquiring machines lock for ingress-addon-legacy-592927: {Name:mkd5f9805a8a8ce20163845b459b42f12165437c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 00:31:04.613974  531457 start.go:369] acquired machines lock for "ingress-addon-legacy-592927" in 63.146µs
	I0214 00:31:04.614001  531457 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-592927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-592927 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 00:31:04.614073  531457 start.go:125] createHost starting for "" (driver="docker")
	I0214 00:31:04.616312  531457 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0214 00:31:04.616536  531457 start.go:159] libmachine.API.Create for "ingress-addon-legacy-592927" (driver="docker")
	I0214 00:31:04.616570  531457 client.go:168] LocalClient.Create starting
	I0214 00:31:04.616629  531457 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem
	I0214 00:31:04.616666  531457 main.go:141] libmachine: Decoding PEM data...
	I0214 00:31:04.616687  531457 main.go:141] libmachine: Parsing certificate...
	I0214 00:31:04.616752  531457 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18169-498689/.minikube/certs/cert.pem
	I0214 00:31:04.616775  531457 main.go:141] libmachine: Decoding PEM data...
	I0214 00:31:04.616792  531457 main.go:141] libmachine: Parsing certificate...
	I0214 00:31:04.617166  531457 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-592927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0214 00:31:04.632319  531457 cli_runner.go:211] docker network inspect ingress-addon-legacy-592927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0214 00:31:04.632410  531457 network_create.go:281] running [docker network inspect ingress-addon-legacy-592927] to gather additional debugging logs...
	I0214 00:31:04.632431  531457 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-592927
	W0214 00:31:04.647375  531457 cli_runner.go:211] docker network inspect ingress-addon-legacy-592927 returned with exit code 1
	I0214 00:31:04.647412  531457 network_create.go:284] error running [docker network inspect ingress-addon-legacy-592927]: docker network inspect ingress-addon-legacy-592927: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-592927 not found
	I0214 00:31:04.647427  531457 network_create.go:286] output of [docker network inspect ingress-addon-legacy-592927]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-592927 not found
	
	** /stderr **
	I0214 00:31:04.647561  531457 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 00:31:04.662949  531457 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000489540}
	I0214 00:31:04.662995  531457 network_create.go:124] attempt to create docker network ingress-addon-legacy-592927 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0214 00:31:04.663053  531457 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-592927 ingress-addon-legacy-592927
	I0214 00:31:04.725124  531457 network_create.go:108] docker network ingress-addon-legacy-592927 192.168.49.0/24 created
	I0214 00:31:04.725158  531457 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-592927" container
	I0214 00:31:04.725239  531457 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0214 00:31:04.740348  531457 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-592927 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-592927 --label created_by.minikube.sigs.k8s.io=true
	I0214 00:31:04.756959  531457 oci.go:103] Successfully created a docker volume ingress-addon-legacy-592927
	I0214 00:31:04.757045  531457 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-592927-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-592927 --entrypoint /usr/bin/test -v ingress-addon-legacy-592927:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0214 00:31:06.302595  531457 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-592927-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-592927 --entrypoint /usr/bin/test -v ingress-addon-legacy-592927:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.545508762s)
	I0214 00:31:06.302626  531457 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-592927
	I0214 00:31:06.302645  531457 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0214 00:31:06.302665  531457 kic.go:194] Starting extracting preloaded images to volume ...
	I0214 00:31:06.302752  531457 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-592927:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0214 00:31:11.334608  531457 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-592927:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.031814219s)
	I0214 00:31:11.334641  531457 kic.go:203] duration metric: took 5.031973 seconds to extract preloaded images to volume
	W0214 00:31:11.334796  531457 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0214 00:31:11.334908  531457 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0214 00:31:11.386825  531457 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-592927 --name ingress-addon-legacy-592927 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-592927 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-592927 --network ingress-addon-legacy-592927 --ip 192.168.49.2 --volume ingress-addon-legacy-592927:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0214 00:31:11.726016  531457 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-592927 --format={{.State.Running}}
	I0214 00:31:11.750944  531457 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-592927 --format={{.State.Status}}
	I0214 00:31:11.774809  531457 cli_runner.go:164] Run: docker exec ingress-addon-legacy-592927 stat /var/lib/dpkg/alternatives/iptables
	I0214 00:31:11.839907  531457 oci.go:144] the created container "ingress-addon-legacy-592927" has a running status.
	I0214 00:31:11.839939  531457 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18169-498689/.minikube/machines/ingress-addon-legacy-592927/id_rsa...
	I0214 00:31:12.187919  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/machines/ingress-addon-legacy-592927/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0214 00:31:12.187967  531457 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18169-498689/.minikube/machines/ingress-addon-legacy-592927/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0214 00:31:12.211841  531457 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-592927 --format={{.State.Status}}
	I0214 00:31:12.234998  531457 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0214 00:31:12.235030  531457 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-592927 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0214 00:31:12.315109  531457 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-592927 --format={{.State.Status}}
	I0214 00:31:12.341949  531457 machine.go:88] provisioning docker machine ...
	I0214 00:31:12.341984  531457 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-592927"
	I0214 00:31:12.342065  531457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592927
	I0214 00:31:12.366937  531457 main.go:141] libmachine: Using SSH client type: native
	I0214 00:31:12.367384  531457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33407 <nil> <nil>}
	I0214 00:31:12.367405  531457 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-592927 && echo "ingress-addon-legacy-592927" | sudo tee /etc/hostname
	I0214 00:31:12.368076  531457 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47802->127.0.0.1:33407: read: connection reset by peer
	I0214 00:31:15.510143  531457 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-592927
	
	I0214 00:31:15.510269  531457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592927
	I0214 00:31:15.526586  531457 main.go:141] libmachine: Using SSH client type: native
	I0214 00:31:15.527004  531457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33407 <nil> <nil>}
	I0214 00:31:15.527028  531457 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-592927' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-592927/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-592927' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 00:31:15.661953  531457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 00:31:15.661982  531457 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18169-498689/.minikube CaCertPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18169-498689/.minikube}
	I0214 00:31:15.662012  531457 ubuntu.go:177] setting up certificates
	I0214 00:31:15.662023  531457 provision.go:83] configureAuth start
	I0214 00:31:15.662080  531457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-592927
	I0214 00:31:15.680110  531457 provision.go:138] copyHostCerts
	I0214 00:31:15.680157  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18169-498689/.minikube/ca.pem
	I0214 00:31:15.680190  531457 exec_runner.go:144] found /home/jenkins/minikube-integration/18169-498689/.minikube/ca.pem, removing ...
	I0214 00:31:15.680204  531457 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18169-498689/.minikube/ca.pem
	I0214 00:31:15.680282  531457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18169-498689/.minikube/ca.pem (1078 bytes)
	I0214 00:31:15.680398  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18169-498689/.minikube/cert.pem
	I0214 00:31:15.680422  531457 exec_runner.go:144] found /home/jenkins/minikube-integration/18169-498689/.minikube/cert.pem, removing ...
	I0214 00:31:15.680427  531457 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18169-498689/.minikube/cert.pem
	I0214 00:31:15.680455  531457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18169-498689/.minikube/cert.pem (1123 bytes)
	I0214 00:31:15.680494  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18169-498689/.minikube/key.pem
	I0214 00:31:15.680515  531457 exec_runner.go:144] found /home/jenkins/minikube-integration/18169-498689/.minikube/key.pem, removing ...
	I0214 00:31:15.680519  531457 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18169-498689/.minikube/key.pem
	I0214 00:31:15.680542  531457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18169-498689/.minikube/key.pem (1675 bytes)
	I0214 00:31:15.680589  531457 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18169-498689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-592927 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-592927]
	I0214 00:31:16.179398  531457 provision.go:172] copyRemoteCerts
	I0214 00:31:16.179473  531457 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 00:31:16.179521  531457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592927
	I0214 00:31:16.194774  531457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/ingress-addon-legacy-592927/id_rsa Username:docker}
	I0214 00:31:16.291326  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0214 00:31:16.291390  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0214 00:31:16.315465  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0214 00:31:16.315523  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0214 00:31:16.338520  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0214 00:31:16.338588  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0214 00:31:16.362210  531457 provision.go:86] duration metric: configureAuth took 700.173612ms
	I0214 00:31:16.362238  531457 ubuntu.go:193] setting minikube options for container-runtime
	I0214 00:31:16.362458  531457 config.go:182] Loaded profile config "ingress-addon-legacy-592927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0214 00:31:16.362573  531457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592927
	I0214 00:31:16.377955  531457 main.go:141] libmachine: Using SSH client type: native
	I0214 00:31:16.378398  531457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33407 <nil> <nil>}
	I0214 00:31:16.378419  531457 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 00:31:16.634331  531457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 00:31:16.634356  531457 machine.go:91] provisioned docker machine in 4.292381946s
	I0214 00:31:16.634367  531457 client.go:171] LocalClient.Create took 12.017787022s
	I0214 00:31:16.634380  531457 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-592927" took 12.017844162s
	I0214 00:31:16.634394  531457 start.go:300] post-start starting for "ingress-addon-legacy-592927" (driver="docker")
	I0214 00:31:16.634405  531457 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 00:31:16.634476  531457 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 00:31:16.634534  531457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592927
	I0214 00:31:16.652327  531457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/ingress-addon-legacy-592927/id_rsa Username:docker}
	I0214 00:31:16.746615  531457 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 00:31:16.749560  531457 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0214 00:31:16.749596  531457 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0214 00:31:16.749608  531457 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0214 00:31:16.749619  531457 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0214 00:31:16.749634  531457 filesync.go:126] Scanning /home/jenkins/minikube-integration/18169-498689/.minikube/addons for local assets ...
	I0214 00:31:16.749700  531457 filesync.go:126] Scanning /home/jenkins/minikube-integration/18169-498689/.minikube/files for local assets ...
	I0214 00:31:16.749840  531457 filesync.go:149] local asset: /home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/5040612.pem -> 5040612.pem in /etc/ssl/certs
	I0214 00:31:16.749852  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/5040612.pem -> /etc/ssl/certs/5040612.pem
	I0214 00:31:16.749968  531457 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 00:31:16.758287  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/5040612.pem --> /etc/ssl/certs/5040612.pem (1708 bytes)
	I0214 00:31:16.781381  531457 start.go:303] post-start completed in 146.972222ms
	I0214 00:31:16.781780  531457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-592927
	I0214 00:31:16.797303  531457 profile.go:148] Saving config to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/config.json ...
	I0214 00:31:16.797575  531457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 00:31:16.797616  531457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592927
	I0214 00:31:16.812756  531457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/ingress-addon-legacy-592927/id_rsa Username:docker}
	I0214 00:31:16.903269  531457 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0214 00:31:16.907232  531457 start.go:128] duration metric: createHost completed in 12.293132309s
	I0214 00:31:16.907254  531457 start.go:83] releasing machines lock for "ingress-addon-legacy-592927", held for 12.29326382s
	I0214 00:31:16.907327  531457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-592927
	I0214 00:31:16.926393  531457 ssh_runner.go:195] Run: cat /version.json
	I0214 00:31:16.926447  531457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592927
	I0214 00:31:16.926447  531457 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 00:31:16.926527  531457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592927
	I0214 00:31:16.944518  531457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/ingress-addon-legacy-592927/id_rsa Username:docker}
	I0214 00:31:16.954044  531457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/ingress-addon-legacy-592927/id_rsa Username:docker}
	I0214 00:31:17.037353  531457 ssh_runner.go:195] Run: systemctl --version
	I0214 00:31:17.169257  531457 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 00:31:17.311853  531457 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0214 00:31:17.315997  531457 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 00:31:17.336376  531457 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0214 00:31:17.336449  531457 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 00:31:17.371597  531457 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0214 00:31:17.371618  531457 start.go:475] detecting cgroup driver to use...
	I0214 00:31:17.371651  531457 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0214 00:31:17.371706  531457 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 00:31:17.388728  531457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 00:31:17.400525  531457 docker.go:217] disabling cri-docker service (if available) ...
	I0214 00:31:17.400632  531457 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 00:31:17.415422  531457 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 00:31:17.429593  531457 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 00:31:17.521779  531457 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 00:31:17.632916  531457 docker.go:233] disabling docker service ...
	I0214 00:31:17.632999  531457 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 00:31:17.654464  531457 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 00:31:17.667305  531457 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 00:31:17.762073  531457 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 00:31:17.847823  531457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 00:31:17.860113  531457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 00:31:17.878169  531457 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0214 00:31:17.878266  531457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 00:31:17.888210  531457 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 00:31:17.888289  531457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 00:31:17.897890  531457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 00:31:17.908139  531457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
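	The three sed edits above only rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup). A minimal way to confirm the result by hand; the expected values below are taken from this log, the exact file layout is an assumption:
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.2"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"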
	I0214 00:31:17.917442  531457 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 00:31:17.926134  531457 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 00:31:17.934204  531457 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 00:31:17.942811  531457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 00:31:18.030693  531457 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 00:31:18.157382  531457 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 00:31:18.157503  531457 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 00:31:18.161337  531457 start.go:543] Will wait 60s for crictl version
	I0214 00:31:18.161440  531457 ssh_runner.go:195] Run: which crictl
	I0214 00:31:18.165004  531457 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 00:31:18.204288  531457 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0214 00:31:18.204414  531457 ssh_runner.go:195] Run: crio --version
	I0214 00:31:18.243576  531457 ssh_runner.go:195] Run: crio --version
	I0214 00:31:18.282719  531457 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0214 00:31:18.284834  531457 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-592927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 00:31:18.300612  531457 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0214 00:31:18.304037  531457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
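	The bash one-liner above is an idempotent /etc/hosts update: drop any existing host.minikube.internal entry, append it again with the current gateway IP, and copy the temp file back as root. A minimal sketch of the same pattern, reusing the IP and name from this run:
	  IP=192.168.49.1; NAME=host.minikube.internal
	  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$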
	I0214 00:31:18.314827  531457 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0214 00:31:18.314911  531457 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 00:31:18.364890  531457 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0214 00:31:18.364962  531457 ssh_runner.go:195] Run: which lz4
	I0214 00:31:18.368367  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0214 00:31:18.368479  531457 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0214 00:31:18.371687  531457 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0214 00:31:18.371720  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0214 00:31:20.454021  531457 crio.go:444] Took 2.085579 seconds to copy over tarball
	I0214 00:31:20.454122  531457 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0214 00:31:23.210339  531457 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.756184703s)
	I0214 00:31:23.210407  531457 crio.go:451] Took 2.756357 seconds to extract the tarball
	I0214 00:31:23.210426  531457 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0214 00:31:23.573718  531457 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 00:31:23.609189  531457 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0214 00:31:23.609216  531457 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0214 00:31:23.609319  531457 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0214 00:31:23.609529  531457 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0214 00:31:23.609607  531457 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0214 00:31:23.609688  531457 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0214 00:31:23.609838  531457 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0214 00:31:23.609925  531457 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0214 00:31:23.610009  531457 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0214 00:31:23.610200  531457 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 00:31:23.610815  531457 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0214 00:31:23.611197  531457 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0214 00:31:23.611351  531457 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0214 00:31:23.611486  531457 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0214 00:31:23.611606  531457 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0214 00:31:23.611661  531457 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0214 00:31:23.611591  531457 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0214 00:31:23.612297  531457 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 00:31:23.954008  531457 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0214 00:31:23.972524  531457 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0214 00:31:23.972798  531457 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0214 00:31:23.978974  531457 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0214 00:31:23.979166  531457 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0214 00:31:23.994595  531457 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0214 00:31:23.994764  531457 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0214 00:31:23.995929  531457 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0214 00:31:23.996083  531457 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0214 00:31:24.019459  531457 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0214 00:31:24.019515  531457 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0214 00:31:24.019564  531457 ssh_runner.go:195] Run: which crictl
	W0214 00:31:24.022352  531457 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0214 00:31:24.022526  531457 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0214 00:31:24.044047  531457 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0214 00:31:24.044226  531457 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0214 00:31:24.125586  531457 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0214 00:31:24.125631  531457 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0214 00:31:24.125686  531457 ssh_runner.go:195] Run: which crictl
	I0214 00:31:24.125797  531457 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0214 00:31:24.125818  531457 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0214 00:31:24.125844  531457 ssh_runner.go:195] Run: which crictl
	I0214 00:31:24.142138  531457 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0214 00:31:24.142185  531457 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0214 00:31:24.142234  531457 ssh_runner.go:195] Run: which crictl
	I0214 00:31:24.156541  531457 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0214 00:31:24.156585  531457 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0214 00:31:24.156640  531457 ssh_runner.go:195] Run: which crictl
	I0214 00:31:24.156719  531457 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	W0214 00:31:24.193311  531457 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0214 00:31:24.193482  531457 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 00:31:24.219256  531457 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0214 00:31:24.219367  531457 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0214 00:31:24.219485  531457 ssh_runner.go:195] Run: which crictl
	I0214 00:31:24.219649  531457 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0214 00:31:24.219707  531457 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0214 00:31:24.219793  531457 ssh_runner.go:195] Run: which crictl
	I0214 00:31:24.219967  531457 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0214 00:31:24.220088  531457 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0214 00:31:24.220215  531457 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0214 00:31:24.220578  531457 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0214 00:31:24.225467  531457 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18169-498689/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0214 00:31:24.370195  531457 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0214 00:31:24.370293  531457 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 00:31:24.370346  531457 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18169-498689/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0214 00:31:24.370399  531457 ssh_runner.go:195] Run: which crictl
	I0214 00:31:24.370440  531457 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0214 00:31:24.370520  531457 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18169-498689/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0214 00:31:24.370581  531457 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18169-498689/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0214 00:31:24.370673  531457 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18169-498689/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0214 00:31:24.370704  531457 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0214 00:31:24.425623  531457 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18169-498689/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0214 00:31:24.425699  531457 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 00:31:24.425785  531457 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18169-498689/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0214 00:31:24.480062  531457 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18169-498689/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0214 00:31:24.480142  531457 cache_images.go:92] LoadImages completed in 870.912294ms
	W0214 00:31:24.480208  531457 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18169-498689/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
	I0214 00:31:24.480281  531457 ssh_runner.go:195] Run: crio config
	I0214 00:31:24.536490  531457 cni.go:84] Creating CNI manager for ""
	I0214 00:31:24.536518  531457 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 00:31:24.536536  531457 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0214 00:31:24.536555  531457 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-592927 NodeName:ingress-addon-legacy-592927 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0214 00:31:24.536686  531457 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-592927"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 00:31:24.536764  531457 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-592927 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-592927 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0214 00:31:24.536838  531457 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0214 00:31:24.545582  531457 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 00:31:24.545658  531457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 00:31:24.554239  531457 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0214 00:31:24.572496  531457 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0214 00:31:24.590657  531457 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0214 00:31:24.609116  531457 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0214 00:31:24.612531  531457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 00:31:24.623417  531457 certs.go:56] Setting up /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927 for IP: 192.168.49.2
	I0214 00:31:24.623464  531457 certs.go:190] acquiring lock for shared ca certs: {Name:mk24bda5a01a6d67ca318fbbda66875cef4a1a9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:31:24.623637  531457 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18169-498689/.minikube/ca.key
	I0214 00:31:24.623690  531457 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.key
	I0214 00:31:24.623749  531457 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.key
	I0214 00:31:24.623766  531457 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt with IP's: []
	I0214 00:31:24.995255  531457 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt ...
	I0214 00:31:24.995288  531457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: {Name:mkdd38aff390a48c86e8e6487ba6d4b1b61ee8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:31:24.995506  531457 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.key ...
	I0214 00:31:24.995523  531457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.key: {Name:mk48ed196e2dd2a0b9b94bb75e10deb6cb54ba93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:31:24.995606  531457 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/apiserver.key.dd3b5fb2
	I0214 00:31:24.995622  531457 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0214 00:31:25.890468  531457 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/apiserver.crt.dd3b5fb2 ...
	I0214 00:31:25.890501  531457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/apiserver.crt.dd3b5fb2: {Name:mk6406420a78de7ac424df53e61f6df43c50481f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:31:25.890708  531457 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/apiserver.key.dd3b5fb2 ...
	I0214 00:31:25.890724  531457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/apiserver.key.dd3b5fb2: {Name:mkfb9cca2b7bd22bbefa9b46b865ab075439979a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:31:25.890809  531457 certs.go:337] copying /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/apiserver.crt
	I0214 00:31:25.890901  531457 certs.go:341] copying /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/apiserver.key
	I0214 00:31:25.890964  531457 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/proxy-client.key
	I0214 00:31:25.890984  531457 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/proxy-client.crt with IP's: []
	I0214 00:31:26.281206  531457 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/proxy-client.crt ...
	I0214 00:31:26.281240  531457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/proxy-client.crt: {Name:mk60143d167461ff2cba4d958ac98c91a78503e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:31:26.281429  531457 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/proxy-client.key ...
	I0214 00:31:26.281444  531457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/proxy-client.key: {Name:mked868cdb7305ee2c530f8e2d3548842313aa94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:31:26.281533  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0214 00:31:26.281554  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0214 00:31:26.281566  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0214 00:31:26.281580  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0214 00:31:26.281591  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0214 00:31:26.281608  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0214 00:31:26.281624  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0214 00:31:26.281639  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0214 00:31:26.281688  531457 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/504061.pem (1338 bytes)
	W0214 00:31:26.281757  531457 certs.go:433] ignoring /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/504061_empty.pem, impossibly tiny 0 bytes
	I0214 00:31:26.281772  531457 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca-key.pem (1679 bytes)
	I0214 00:31:26.281808  531457 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem (1078 bytes)
	I0214 00:31:26.281841  531457 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/cert.pem (1123 bytes)
	I0214 00:31:26.281872  531457 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/key.pem (1675 bytes)
	I0214 00:31:26.281920  531457 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/5040612.pem (1708 bytes)
	I0214 00:31:26.281953  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/5040612.pem -> /usr/share/ca-certificates/5040612.pem
	I0214 00:31:26.281969  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0214 00:31:26.281985  531457 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/504061.pem -> /usr/share/ca-certificates/504061.pem
	I0214 00:31:26.282668  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0214 00:31:26.307416  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0214 00:31:26.331739  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 00:31:26.356322  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0214 00:31:26.380714  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 00:31:26.404688  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0214 00:31:26.428690  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 00:31:26.452588  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0214 00:31:26.476543  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/5040612.pem --> /usr/share/ca-certificates/5040612.pem (1708 bytes)
	I0214 00:31:26.500706  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 00:31:26.525112  531457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/certs/504061.pem --> /usr/share/ca-certificates/504061.pem (1338 bytes)
	I0214 00:31:26.549231  531457 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 00:31:26.567007  531457 ssh_runner.go:195] Run: openssl version
	I0214 00:31:26.572404  531457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040612.pem && ln -fs /usr/share/ca-certificates/5040612.pem /etc/ssl/certs/5040612.pem"
	I0214 00:31:26.581850  531457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5040612.pem
	I0214 00:31:26.585078  531457 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 00:26 /usr/share/ca-certificates/5040612.pem
	I0214 00:31:26.585143  531457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040612.pem
	I0214 00:31:26.592048  531457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5040612.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 00:31:26.601354  531457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 00:31:26.610374  531457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 00:31:26.614060  531457 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 00:19 /usr/share/ca-certificates/minikubeCA.pem
	I0214 00:31:26.614125  531457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 00:31:26.621090  531457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 00:31:26.630681  531457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/504061.pem && ln -fs /usr/share/ca-certificates/504061.pem /etc/ssl/certs/504061.pem"
	I0214 00:31:26.640143  531457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/504061.pem
	I0214 00:31:26.643618  531457 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 00:26 /usr/share/ca-certificates/504061.pem
	I0214 00:31:26.643682  531457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/504061.pem
	I0214 00:31:26.650673  531457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/504061.pem /etc/ssl/certs/51391683.0"
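	The openssl/ln pairs above implement OpenSSL's subject-hash lookup: each CA copied into /usr/share/ca-certificates gets a symlink under /etc/ssl/certs named after its subject hash plus a .0 suffix, which is how TLS clients locate it. A minimal sketch of that step for one certificate, reusing the minikubeCA hash seen earlier in this log:
	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 for this CA
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"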
	I0214 00:31:26.660286  531457 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0214 00:31:26.663524  531457 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0214 00:31:26.663598  531457 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-592927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-592927 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 00:31:26.663681  531457 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 00:31:26.663776  531457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 00:31:26.702644  531457 cri.go:89] found id: ""
	I0214 00:31:26.702712  531457 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 00:31:26.711593  531457 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 00:31:26.720340  531457 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0214 00:31:26.720419  531457 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 00:31:26.729544  531457 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 00:31:26.729594  531457 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0214 00:31:26.785224  531457 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0214 00:31:26.785305  531457 kubeadm.go:322] [preflight] Running pre-flight checks
	I0214 00:31:26.832563  531457 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0214 00:31:26.832663  531457 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0214 00:31:26.832720  531457 kubeadm.go:322] OS: Linux
	I0214 00:31:26.832796  531457 kubeadm.go:322] CGROUPS_CPU: enabled
	I0214 00:31:26.832860  531457 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0214 00:31:26.832937  531457 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0214 00:31:26.833010  531457 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0214 00:31:26.833085  531457 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0214 00:31:26.833154  531457 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0214 00:31:26.916410  531457 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 00:31:26.916696  531457 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 00:31:26.916843  531457 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 00:31:27.143952  531457 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 00:31:27.145499  531457 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 00:31:27.145691  531457 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0214 00:31:27.242255  531457 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 00:31:27.246851  531457 out.go:204]   - Generating certificates and keys ...
	I0214 00:31:27.246991  531457 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0214 00:31:27.247093  531457 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0214 00:31:27.440055  531457 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 00:31:27.737621  531457 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0214 00:31:28.521279  531457 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0214 00:31:29.236625  531457 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0214 00:31:29.978577  531457 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0214 00:31:29.978878  531457 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-592927 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 00:31:30.623979  531457 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0214 00:31:30.624331  531457 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-592927 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 00:31:31.015158  531457 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 00:31:31.160573  531457 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 00:31:32.283080  531457 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0214 00:31:32.283381  531457 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 00:31:33.066692  531457 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 00:31:33.496298  531457 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 00:31:34.051378  531457 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 00:31:34.872666  531457 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 00:31:34.875772  531457 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 00:31:34.877871  531457 out.go:204]   - Booting up control plane ...
	I0214 00:31:34.877978  531457 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 00:31:34.882637  531457 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 00:31:34.891411  531457 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 00:31:34.892195  531457 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 00:31:34.894511  531457 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 00:31:47.899649  531457 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.005200 seconds
	I0214 00:31:47.899791  531457 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 00:31:47.912688  531457 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 00:31:48.437290  531457 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 00:31:48.437441  531457 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-592927 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0214 00:31:48.950284  531457 kubeadm.go:322] [bootstrap-token] Using token: qn9w4j.q0m5i3c8onsjhpc6
	I0214 00:31:48.952314  531457 out.go:204]   - Configuring RBAC rules ...
	I0214 00:31:48.952436  531457 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 00:31:48.956681  531457 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 00:31:48.966509  531457 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 00:31:48.969831  531457 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 00:31:48.972577  531457 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 00:31:48.976445  531457 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 00:31:48.985537  531457 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 00:31:49.257061  531457 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0214 00:31:49.365967  531457 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0214 00:31:49.367368  531457 kubeadm.go:322] 
	I0214 00:31:49.367441  531457 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0214 00:31:49.367453  531457 kubeadm.go:322] 
	I0214 00:31:49.367525  531457 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0214 00:31:49.367534  531457 kubeadm.go:322] 
	I0214 00:31:49.367558  531457 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0214 00:31:49.367617  531457 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 00:31:49.367669  531457 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 00:31:49.367676  531457 kubeadm.go:322] 
	I0214 00:31:49.367730  531457 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0214 00:31:49.367804  531457 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 00:31:49.367871  531457 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 00:31:49.367883  531457 kubeadm.go:322] 
	I0214 00:31:49.367961  531457 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 00:31:49.368036  531457 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0214 00:31:49.368045  531457 kubeadm.go:322] 
	I0214 00:31:49.368124  531457 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token qn9w4j.q0m5i3c8onsjhpc6 \
	I0214 00:31:49.368226  531457 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:44f9d2d2d797c45382846d4b51b4e7b005961554b46257e185c55dad3bb0bd1d \
	I0214 00:31:49.368252  531457 kubeadm.go:322]     --control-plane 
	I0214 00:31:49.368259  531457 kubeadm.go:322] 
	I0214 00:31:49.368339  531457 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0214 00:31:49.368347  531457 kubeadm.go:322] 
	I0214 00:31:49.368423  531457 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token qn9w4j.q0m5i3c8onsjhpc6 \
	I0214 00:31:49.368524  531457 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:44f9d2d2d797c45382846d4b51b4e7b005961554b46257e185c55dad3bb0bd1d 
	I0214 00:31:49.371751  531457 kubeadm.go:322] W0214 00:31:26.784567    1227 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0214 00:31:49.371969  531457 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0214 00:31:49.372074  531457 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 00:31:49.372202  531457 kubeadm.go:322] W0214 00:31:34.889848    1227 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0214 00:31:49.372324  531457 kubeadm.go:322] W0214 00:31:34.891270    1227 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0214 00:31:49.372343  531457 cni.go:84] Creating CNI manager for ""
	I0214 00:31:49.372351  531457 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 00:31:49.374619  531457 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0214 00:31:49.376399  531457 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0214 00:31:49.380079  531457 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0214 00:31:49.380098  531457 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0214 00:31:49.398648  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
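	Once the CNI manifest is applied, the kindnet pods have to come up before pod networking works. A hedged way to check that by hand; the kindnet pod name pattern is an assumption, not read from this log:
	  kubectl -n kube-system get pods -o wide | grep -i kindnet
	  kubectl get nodes   # the node should report Ready once the CNI is running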
	I0214 00:31:49.827176  531457 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 00:31:49.827252  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:49.827309  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=90664111bc55fed26ce3e984eae935c06b114802 minikube.k8s.io/name=ingress-addon-legacy-592927 minikube.k8s.io/updated_at=2024_02_14T00_31_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:49.843544  531457 ops.go:34] apiserver oom_adj: -16
	I0214 00:31:49.969079  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:50.469826  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:50.969204  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:51.469214  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:51.969367  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:52.469239  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:52.969218  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:53.469150  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:53.969906  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:54.469858  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:54.969813  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:55.469947  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:55.970070  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:56.469812  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:56.969924  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:57.469457  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:57.969781  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:58.469499  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:58.969220  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:59.469222  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:31:59.969523  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:32:00.470241  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:32:00.969190  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:32:01.469201  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:32:01.969502  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:32:02.469192  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:32:02.969363  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:32:03.469832  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:32:03.969669  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:32:04.469176  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:32:04.969608  531457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 00:32:05.105513  531457 kubeadm.go:1088] duration metric: took 15.278320355s to wait for elevateKubeSystemPrivileges.
	I0214 00:32:05.105546  531457 kubeadm.go:406] StartCluster complete in 38.441952727s
	I0214 00:32:05.105566  531457 settings.go:142] acquiring lock: {Name:mk6da46f5cb0f714c2fcf3244fbf0dfa768ab578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:32:05.105635  531457 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 00:32:05.106387  531457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/kubeconfig: {Name:mke09ed5dbaa4240bee61fddd1ec0468d82bdfbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:32:05.107164  531457 kapi.go:59] client config for ingress-addon-legacy-592927: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt", KeyFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.key", CAFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 00:32:05.108223  531457 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 00:32:05.108484  531457 config.go:182] Loaded profile config "ingress-addon-legacy-592927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0214 00:32:05.108514  531457 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0214 00:32:05.108586  531457 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-592927"
	I0214 00:32:05.108599  531457 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-592927"
	I0214 00:32:05.108643  531457 host.go:66] Checking if "ingress-addon-legacy-592927" exists ...
	I0214 00:32:05.109086  531457 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-592927 --format={{.State.Status}}
	I0214 00:32:05.109763  531457 cert_rotation.go:137] Starting client certificate rotation controller
	I0214 00:32:05.109799  531457 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-592927"
	I0214 00:32:05.109813  531457 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-592927"
	I0214 00:32:05.110104  531457 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-592927 --format={{.State.Status}}
	I0214 00:32:05.166625  531457 kapi.go:59] client config for ingress-addon-legacy-592927: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt", KeyFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.key", CAFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 00:32:05.166923  531457 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-592927"
	I0214 00:32:05.166956  531457 host.go:66] Checking if "ingress-addon-legacy-592927" exists ...
	I0214 00:32:05.167438  531457 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-592927 --format={{.State.Status}}
	I0214 00:32:05.175677  531457 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 00:32:05.178473  531457 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 00:32:05.178498  531457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 00:32:05.178571  531457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592927
	I0214 00:32:05.193923  531457 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 00:32:05.193945  531457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 00:32:05.194008  531457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592927
	I0214 00:32:05.224956  531457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/ingress-addon-legacy-592927/id_rsa Username:docker}
	I0214 00:32:05.239861  531457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/ingress-addon-legacy-592927/id_rsa Username:docker}
	I0214 00:32:05.319753  531457 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0214 00:32:05.425894  531457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 00:32:05.455187  531457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 00:32:05.681058  531457 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-592927" context rescaled to 1 replicas
	I0214 00:32:05.681105  531457 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 00:32:05.683451  531457 out.go:177] * Verifying Kubernetes components...
	I0214 00:32:05.685525  531457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 00:32:05.811572  531457 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0214 00:32:05.920429  531457 kapi.go:59] client config for ingress-addon-legacy-592927: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt", KeyFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.key", CAFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 00:32:05.920722  531457 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-592927" to be "Ready" ...
	I0214 00:32:05.947166  531457 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0214 00:32:05.949355  531457 addons.go:505] enable addons completed in 840.829341ms: enabled=[storage-provisioner default-storageclass]
	I0214 00:32:07.924633  531457 node_ready.go:58] node "ingress-addon-legacy-592927" has status "Ready":"False"
	I0214 00:32:10.424246  531457 node_ready.go:58] node "ingress-addon-legacy-592927" has status "Ready":"False"
	I0214 00:32:12.924432  531457 node_ready.go:49] node "ingress-addon-legacy-592927" has status "Ready":"True"
	I0214 00:32:12.924460  531457 node_ready.go:38] duration metric: took 7.003722302s waiting for node "ingress-addon-legacy-592927" to be "Ready" ...
	I0214 00:32:12.924475  531457 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 00:32:12.931945  531457 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-2xhbc" in "kube-system" namespace to be "Ready" ...
	I0214 00:32:14.935250  531457 pod_ready.go:102] pod "coredns-66bff467f8-2xhbc" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-14 00:32:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0214 00:32:16.937523  531457 pod_ready.go:102] pod "coredns-66bff467f8-2xhbc" in "kube-system" namespace has status "Ready":"False"
	I0214 00:32:18.938186  531457 pod_ready.go:102] pod "coredns-66bff467f8-2xhbc" in "kube-system" namespace has status "Ready":"False"
	I0214 00:32:20.438346  531457 pod_ready.go:92] pod "coredns-66bff467f8-2xhbc" in "kube-system" namespace has status "Ready":"True"
	I0214 00:32:20.438376  531457 pod_ready.go:81] duration metric: took 7.506393194s waiting for pod "coredns-66bff467f8-2xhbc" in "kube-system" namespace to be "Ready" ...
	I0214 00:32:20.438388  531457 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-592927" in "kube-system" namespace to be "Ready" ...
	I0214 00:32:20.443271  531457 pod_ready.go:92] pod "etcd-ingress-addon-legacy-592927" in "kube-system" namespace has status "Ready":"True"
	I0214 00:32:20.443295  531457 pod_ready.go:81] duration metric: took 4.899576ms waiting for pod "etcd-ingress-addon-legacy-592927" in "kube-system" namespace to be "Ready" ...
	I0214 00:32:20.443310  531457 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-592927" in "kube-system" namespace to be "Ready" ...
	I0214 00:32:20.448346  531457 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-592927" in "kube-system" namespace has status "Ready":"True"
	I0214 00:32:20.448376  531457 pod_ready.go:81] duration metric: took 5.057007ms waiting for pod "kube-apiserver-ingress-addon-legacy-592927" in "kube-system" namespace to be "Ready" ...
	I0214 00:32:20.448389  531457 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-592927" in "kube-system" namespace to be "Ready" ...
	I0214 00:32:20.453105  531457 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-592927" in "kube-system" namespace has status "Ready":"True"
	I0214 00:32:20.453133  531457 pod_ready.go:81] duration metric: took 4.734711ms waiting for pod "kube-controller-manager-ingress-addon-legacy-592927" in "kube-system" namespace to be "Ready" ...
	I0214 00:32:20.453144  531457 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wwt2t" in "kube-system" namespace to be "Ready" ...
	I0214 00:32:20.457681  531457 pod_ready.go:92] pod "kube-proxy-wwt2t" in "kube-system" namespace has status "Ready":"True"
	I0214 00:32:20.457707  531457 pod_ready.go:81] duration metric: took 4.555431ms waiting for pod "kube-proxy-wwt2t" in "kube-system" namespace to be "Ready" ...
	I0214 00:32:20.457738  531457 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-592927" in "kube-system" namespace to be "Ready" ...
	I0214 00:32:20.634146  531457 request.go:629] Waited for 176.343912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-592927
	I0214 00:32:20.833218  531457 request.go:629] Waited for 196.231232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-592927
	I0214 00:32:20.836064  531457 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-592927" in "kube-system" namespace has status "Ready":"True"
	I0214 00:32:20.836086  531457 pod_ready.go:81] duration metric: took 378.338493ms waiting for pod "kube-scheduler-ingress-addon-legacy-592927" in "kube-system" namespace to be "Ready" ...
	I0214 00:32:20.836102  531457 pod_ready.go:38] duration metric: took 7.911612379s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 00:32:20.836117  531457 api_server.go:52] waiting for apiserver process to appear ...
	I0214 00:32:20.836183  531457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 00:32:20.846908  531457 api_server.go:72] duration metric: took 15.1657716s to wait for apiserver process to appear ...
	I0214 00:32:20.846934  531457 api_server.go:88] waiting for apiserver healthz status ...
	I0214 00:32:20.846954  531457 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0214 00:32:20.855565  531457 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0214 00:32:20.856437  531457 api_server.go:141] control plane version: v1.18.20
	I0214 00:32:20.856461  531457 api_server.go:131] duration metric: took 9.519573ms to wait for apiserver health ...
	I0214 00:32:20.856470  531457 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 00:32:21.033879  531457 request.go:629] Waited for 177.308024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0214 00:32:21.039972  531457 system_pods.go:59] 8 kube-system pods found
	I0214 00:32:21.040013  531457 system_pods.go:61] "coredns-66bff467f8-2xhbc" [4c418a5f-b69c-4e78-a89f-c3de7f96bafb] Running
	I0214 00:32:21.040023  531457 system_pods.go:61] "etcd-ingress-addon-legacy-592927" [104fabb4-db3e-4474-9d6b-05ca42dd70a0] Running
	I0214 00:32:21.040029  531457 system_pods.go:61] "kindnet-4kvnk" [bd88d0ba-36f8-41c1-9cb0-6f1822ac6dc6] Running
	I0214 00:32:21.040035  531457 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-592927" [41b6ca58-4398-4fb1-922f-171bc195c0c1] Running
	I0214 00:32:21.040044  531457 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-592927" [4dfa43a4-47e2-4a06-8104-c153b6476da6] Running
	I0214 00:32:21.040049  531457 system_pods.go:61] "kube-proxy-wwt2t" [910c8f0d-4a77-4271-9154-c389768364eb] Running
	I0214 00:32:21.040060  531457 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-592927" [3510ca2d-b4c8-49c4-ad62-05372b97b5c0] Running
	I0214 00:32:21.040070  531457 system_pods.go:61] "storage-provisioner" [fbad9afc-715c-49b5-8fdd-f9b9883a604b] Running
	I0214 00:32:21.040082  531457 system_pods.go:74] duration metric: took 183.590269ms to wait for pod list to return data ...
	I0214 00:32:21.040095  531457 default_sa.go:34] waiting for default service account to be created ...
	I0214 00:32:21.233488  531457 request.go:629] Waited for 193.316192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0214 00:32:21.236026  531457 default_sa.go:45] found service account: "default"
	I0214 00:32:21.236054  531457 default_sa.go:55] duration metric: took 195.9529ms for default service account to be created ...
	I0214 00:32:21.236065  531457 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 00:32:21.433480  531457 request.go:629] Waited for 197.339105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0214 00:32:21.439682  531457 system_pods.go:86] 8 kube-system pods found
	I0214 00:32:21.439711  531457 system_pods.go:89] "coredns-66bff467f8-2xhbc" [4c418a5f-b69c-4e78-a89f-c3de7f96bafb] Running
	I0214 00:32:21.439719  531457 system_pods.go:89] "etcd-ingress-addon-legacy-592927" [104fabb4-db3e-4474-9d6b-05ca42dd70a0] Running
	I0214 00:32:21.439724  531457 system_pods.go:89] "kindnet-4kvnk" [bd88d0ba-36f8-41c1-9cb0-6f1822ac6dc6] Running
	I0214 00:32:21.439729  531457 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-592927" [41b6ca58-4398-4fb1-922f-171bc195c0c1] Running
	I0214 00:32:21.439735  531457 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-592927" [4dfa43a4-47e2-4a06-8104-c153b6476da6] Running
	I0214 00:32:21.439759  531457 system_pods.go:89] "kube-proxy-wwt2t" [910c8f0d-4a77-4271-9154-c389768364eb] Running
	I0214 00:32:21.439769  531457 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-592927" [3510ca2d-b4c8-49c4-ad62-05372b97b5c0] Running
	I0214 00:32:21.439775  531457 system_pods.go:89] "storage-provisioner" [fbad9afc-715c-49b5-8fdd-f9b9883a604b] Running
	I0214 00:32:21.439788  531457 system_pods.go:126] duration metric: took 203.71868ms to wait for k8s-apps to be running ...
	I0214 00:32:21.439797  531457 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 00:32:21.439872  531457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 00:32:21.451582  531457 system_svc.go:56] duration metric: took 11.773949ms WaitForService to wait for kubelet.
	I0214 00:32:21.451609  531457 kubeadm.go:581] duration metric: took 15.770479308s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0214 00:32:21.451629  531457 node_conditions.go:102] verifying NodePressure condition ...
	I0214 00:32:21.634022  531457 request.go:629] Waited for 182.3158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0214 00:32:21.637077  531457 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 00:32:21.637108  531457 node_conditions.go:123] node cpu capacity is 2
	I0214 00:32:21.637121  531457 node_conditions.go:105] duration metric: took 185.486372ms to run NodePressure ...
	I0214 00:32:21.637150  531457 start.go:228] waiting for startup goroutines ...
	I0214 00:32:21.637164  531457 start.go:233] waiting for cluster config update ...
	I0214 00:32:21.637174  531457 start.go:242] writing updated cluster config ...
	I0214 00:32:21.637456  531457 ssh_runner.go:195] Run: rm -f paused
	I0214 00:32:21.700247  531457 start.go:600] kubectl: 1.29.1, cluster: 1.18.20 (minor skew: 11)
	I0214 00:32:21.702613  531457 out.go:177] 
	W0214 00:32:21.704709  531457 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0214 00:32:21.706491  531457 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0214 00:32:21.708163  531457 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-592927" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.447016657Z" level=info msg="Stopped container 1c43fc7de880d3804f529b4f1fbda46c14d594103e02265383e2fa723eeaef81: ingress-nginx/ingress-nginx-controller-7fcf777cb7-dxl66/controller" id=38faaf79-6851-4c40-9d41-d47abe3238b6 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.449907435Z" level=info msg="Stopped container 1c43fc7de880d3804f529b4f1fbda46c14d594103e02265383e2fa723eeaef81: ingress-nginx/ingress-nginx-controller-7fcf777cb7-dxl66/controller" id=913b40ff-4fd7-4aba-9c6a-878112bb2c03 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.450150239Z" level=info msg="Stopping pod sandbox: 27194634af792bec5210a86708c056269c66992f4a8bbab48e6e134caa38b938" id=53fae2f0-b5dc-4426-b74a-4c058263dd2f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.453266025Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-WXYVNKCOTERNMOCL - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-HC66LTS6X4ILUV3Q - [0:0]\n-X KUBE-HP-WXYVNKCOTERNMOCL\n-X KUBE-HP-HC66LTS6X4ILUV3Q\nCOMMIT\n"
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.461932944Z" level=info msg="Stopping pod sandbox: 27194634af792bec5210a86708c056269c66992f4a8bbab48e6e134caa38b938" id=f7da2a3a-eead-4244-b849-ecf0e63e1cb4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.462563799Z" level=info msg="Closing host port tcp:80"
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.462606687Z" level=info msg="Closing host port tcp:443"
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.463846071Z" level=info msg="Host port tcp:80 does not have an open socket"
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.463873870Z" level=info msg="Host port tcp:443 does not have an open socket"
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.464033352Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-dxl66 Namespace:ingress-nginx ID:27194634af792bec5210a86708c056269c66992f4a8bbab48e6e134caa38b938 UID:97868945-a9ac-4970-a866-91154a29ff77 NetNS:/var/run/netns/10218e23-7b65-476d-9a28-f6e294a19076 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.464165790Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-dxl66 from CNI network \"kindnet\" (type=ptp)"
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.495254849Z" level=info msg="Stopped pod sandbox: 27194634af792bec5210a86708c056269c66992f4a8bbab48e6e134caa38b938" id=53fae2f0-b5dc-4426-b74a-4c058263dd2f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.495370967Z" level=info msg="Stopped pod sandbox (already stopped): 27194634af792bec5210a86708c056269c66992f4a8bbab48e6e134caa38b938" id=f7da2a3a-eead-4244-b849-ecf0e63e1cb4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.675754582Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=f573eeef-259b-4d6e-9294-e2854315f59d name=/runtime.v1alpha2.ImageService/ImageStatus
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.675996361Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=f573eeef-259b-4d6e-9294-e2854315f59d name=/runtime.v1alpha2.ImageService/ImageStatus
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.677013618Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=e1de1f00-edb1-4125-9a4e-794cbb23e0e2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.677215036Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=e1de1f00-edb1-4125-9a4e-794cbb23e0e2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.678367651Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-2j6xf/hello-world-app" id=b3cf4e2a-c898-4557-90de-1698a0496f55 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.678476213Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.749841672Z" level=info msg="Created container d71968b5a8320d2548a1eb12d61895f7405d2f13bf63951dbf1efa86b6292c23: default/hello-world-app-5f5d8b66bb-2j6xf/hello-world-app" id=b3cf4e2a-c898-4557-90de-1698a0496f55 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.751560700Z" level=info msg="Starting container: d71968b5a8320d2548a1eb12d61895f7405d2f13bf63951dbf1efa86b6292c23" id=7ecf7116-ed45-41f9-a333-4f5e5dc7c695 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Feb 14 00:35:28 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:28.761185028Z" level=info msg="Started container" PID=3743 containerID=d71968b5a8320d2548a1eb12d61895f7405d2f13bf63951dbf1efa86b6292c23 description=default/hello-world-app-5f5d8b66bb-2j6xf/hello-world-app id=7ecf7116-ed45-41f9-a333-4f5e5dc7c695 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=27ff512082b34a100f69fbb9fe89be8929a75090fc824ff1af5b0c4976f703c5
	Feb 14 00:35:28 ingress-addon-legacy-592927 conmon[3732]: conmon d71968b5a8320d2548a1 <ninfo>: container 3743 exited with status 1
	Feb 14 00:35:29 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:29.404773005Z" level=info msg="Removing container: 0e226dbe1301bf18ef21ad80b308ee5efeb3fab9127a36a667ce441951f76df0" id=c9f473c5-582a-40ea-be87-a2201db747ef name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Feb 14 00:35:29 ingress-addon-legacy-592927 crio[893]: time="2024-02-14 00:35:29.426769186Z" level=info msg="Removed container 0e226dbe1301bf18ef21ad80b308ee5efeb3fab9127a36a667ce441951f76df0: default/hello-world-app-5f5d8b66bb-2j6xf/hello-world-app" id=c9f473c5-582a-40ea-be87-a2201db747ef name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d71968b5a8320       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                   5 seconds ago       Exited              hello-world-app           2                   27ff512082b34       hello-world-app-5f5d8b66bb-2j6xf
	1a1a2aa30be1a       docker.io/library/nginx@sha256:4fb7e44d1af9cdfbd38c4e951e84d528662fa083fd74f03f13cd797dc7c39bee                    2 minutes ago       Running             nginx                     0                   efb25600191ec       nginx
	1c43fc7de880d       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   27194634af792       ingress-nginx-controller-7fcf777cb7-dxl66
	e887ab68f9617       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   1479e467f1a8c       ingress-nginx-admission-patch-q8l9p
	b1f0d300bd0a1       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   3ec9676b3296c       ingress-nginx-admission-create-69z42
	fb1c6622e5ee6       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   6f143d0523809       storage-provisioner
	9b1421d448953       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   1f549ae4ef50a       coredns-66bff467f8-2xhbc
	a7b5ec97b8299       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   9faea857784fa       kindnet-4kvnk
	f1fe20d8f6178       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   e47de8b1700e3       kube-proxy-wwt2t
	b7ec385930f82       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   3 minutes ago       Running             kube-apiserver            0                   2168d11c290cf       kube-apiserver-ingress-addon-legacy-592927
	e6f2cf72e3f1f       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   3 minutes ago       Running             kube-controller-manager   0                   c4284abad7a46       kube-controller-manager-ingress-addon-legacy-592927
	a9fd47450ca5d       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   3 minutes ago       Running             kube-scheduler            0                   113f0d385a87b       kube-scheduler-ingress-addon-legacy-592927
	25a2a92d13c4f       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   3 minutes ago       Running             etcd                      0                   9b3f2b0645a91       etcd-ingress-addon-legacy-592927
	
	
	==> coredns [9b1421d4489539d429742adfa5cf1c4e5201095410e97fe3878c01891286b4ea] <==
	[INFO] 10.244.0.5:53773 - 35676 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033863s
	[INFO] 10.244.0.5:53773 - 15009 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002238164s
	[INFO] 10.244.0.5:56588 - 47176 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002137463s
	[INFO] 10.244.0.5:53773 - 1856 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001634417s
	[INFO] 10.244.0.5:56588 - 53901 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001225912s
	[INFO] 10.244.0.5:53773 - 55971 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000135154s
	[INFO] 10.244.0.5:56588 - 50751 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000040509s
	[INFO] 10.244.0.5:43753 - 19763 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000084758s
	[INFO] 10.244.0.5:36495 - 18589 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042108s
	[INFO] 10.244.0.5:43753 - 6635 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000096582s
	[INFO] 10.244.0.5:43753 - 15755 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033428s
	[INFO] 10.244.0.5:43753 - 53866 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034896s
	[INFO] 10.244.0.5:43753 - 11822 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032033s
	[INFO] 10.244.0.5:43753 - 40871 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030769s
	[INFO] 10.244.0.5:36495 - 63886 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000047827s
	[INFO] 10.244.0.5:36495 - 45532 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038539s
	[INFO] 10.244.0.5:36495 - 19773 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031179s
	[INFO] 10.244.0.5:43753 - 5005 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001230449s
	[INFO] 10.244.0.5:36495 - 35190 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000346123s
	[INFO] 10.244.0.5:36495 - 44381 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045185s
	[INFO] 10.244.0.5:43753 - 12574 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00130907s
	[INFO] 10.244.0.5:43753 - 35154 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000052808s
	[INFO] 10.244.0.5:36495 - 27655 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00150248s
	[INFO] 10.244.0.5:36495 - 30957 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001038254s
	[INFO] 10.244.0.5:36495 - 39830 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000047417s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-592927
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-592927
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90664111bc55fed26ce3e984eae935c06b114802
	                    minikube.k8s.io/name=ingress-addon-legacy-592927
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_14T00_31_49_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Feb 2024 00:31:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-592927
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Feb 2024 00:35:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Feb 2024 00:35:22 +0000   Wed, 14 Feb 2024 00:31:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Feb 2024 00:35:22 +0000   Wed, 14 Feb 2024 00:31:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Feb 2024 00:35:22 +0000   Wed, 14 Feb 2024 00:31:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Feb 2024 00:35:22 +0000   Wed, 14 Feb 2024 00:32:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-592927
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d13b800081b40228084e1b644a1f69f
	  System UUID:                b23bda1e-833b-4f42-aee5-76cea2679f21
	  Boot ID:                    abc429c2-787e-4b53-ac30-814ea59b0a0f
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-2j6xf                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bff467f8-2xhbc                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m30s
	  kube-system                 etcd-ingress-addon-legacy-592927                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kindnet-4kvnk                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m29s
	  kube-system                 kube-apiserver-ingress-addon-legacy-592927             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-592927    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-proxy-wwt2t                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-scheduler-ingress-addon-legacy-592927             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m57s (x5 over 3m57s)  kubelet     Node ingress-addon-legacy-592927 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x4 over 3m57s)  kubelet     Node ingress-addon-legacy-592927 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x4 over 3m57s)  kubelet     Node ingress-addon-legacy-592927 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m42s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m42s                  kubelet     Node ingress-addon-legacy-592927 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m42s                  kubelet     Node ingress-addon-legacy-592927 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m42s                  kubelet     Node ingress-addon-legacy-592927 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m29s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m22s                  kubelet     Node ingress-addon-legacy-592927 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001181] FS-Cache: O-key=[8] '523c5c0100000000'
	[  +0.000715] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000973] FS-Cache: N-cookie d=00000000195ec576{9p.inode} n=0000000004512b57
	[  +0.001098] FS-Cache: N-key=[8] '523c5c0100000000'
	[  +0.009444] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000977] FS-Cache: O-cookie d=00000000195ec576{9p.inode} n=000000001ec5b948
	[  +0.001090] FS-Cache: O-key=[8] '523c5c0100000000'
	[  +0.000791] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000962] FS-Cache: N-cookie d=00000000195ec576{9p.inode} n=0000000044435d8b
	[  +0.001054] FS-Cache: N-key=[8] '523c5c0100000000'
	[  +3.160002] FS-Cache: Duplicate cookie detected
	[  +0.000835] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001129] FS-Cache: O-cookie d=00000000195ec576{9p.inode} n=0000000086378eab
	[  +0.001313] FS-Cache: O-key=[8] '513c5c0100000000'
	[  +0.000789] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001103] FS-Cache: N-cookie d=00000000195ec576{9p.inode} n=0000000073249069
	[  +0.001281] FS-Cache: N-key=[8] '513c5c0100000000'
	[  +0.406244] FS-Cache: Duplicate cookie detected
	[  +0.000751] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001144] FS-Cache: O-cookie d=00000000195ec576{9p.inode} n=00000000fcf6afbd
	[  +0.001081] FS-Cache: O-key=[8] '573c5c0100000000'
	[  +0.000734] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001120] FS-Cache: N-cookie d=00000000195ec576{9p.inode} n=00000000e456345f
	[  +0.001247] FS-Cache: N-key=[8] '573c5c0100000000'
	
	
	==> etcd [25a2a92d13c4f5f3430d8aed70d2001e199f8f3b24c0c63db153121c5b95c14d] <==
	raft2024/02/14 00:31:38 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/02/14 00:31:38 INFO: aec36adc501070cc became follower at term 1
	raft2024/02/14 00:31:38 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-02-14 00:31:38.435712 W | auth: simple token is not cryptographically signed
	2024-02-14 00:31:38.444001 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-02-14 00:31:38.446157 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/02/14 00:31:38 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-02-14 00:31:38.446408 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-02-14 00:31:38.447198 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-02-14 00:31:38.447330 I | embed: listening for peers on 192.168.49.2:2380
	2024-02-14 00:31:38.447381 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/02/14 00:31:39 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/02/14 00:31:39 INFO: aec36adc501070cc became candidate at term 2
	raft2024/02/14 00:31:39 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/02/14 00:31:39 INFO: aec36adc501070cc became leader at term 2
	raft2024/02/14 00:31:39 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-02-14 00:31:39.444690 I | etcdserver: published {Name:ingress-addon-legacy-592927 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-02-14 00:31:39.463520 I | embed: ready to serve client requests
	2024-02-14 00:31:39.471102 I | embed: serving client requests on 127.0.0.1:2379
	2024-02-14 00:31:39.535942 I | etcdserver: setting up the initial cluster version to 3.4
	2024-02-14 00:31:39.647359 I | embed: ready to serve client requests
	2024-02-14 00:31:39.655049 I | embed: serving client requests on 192.168.49.2:2379
	2024-02-14 00:31:39.854188 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-02-14 00:31:39.934021 I | etcdserver/api: enabled capabilities for version 3.4
	2024-02-14 00:31:39.969857 W | etcdserver: request "ID:8128027167418990084 Method:\"PUT\" Path:\"/0/version\" Val:\"3.4.0\" " with result "" took too long (123.908014ms) to execute
	
	
	==> kernel <==
	 00:35:34 up  3:17,  0 users,  load average: 0.17, 1.03, 1.64
	Linux ingress-addon-legacy-592927 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [a7b5ec97b82993f356b9f171fa8807b8e525f692e5ab8422ddbc0a6dc861e5c9] <==
	I0214 00:33:28.188921       1 main.go:227] handling current node
	I0214 00:33:38.200082       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:33:38.200113       1 main.go:227] handling current node
	I0214 00:33:48.210418       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:33:48.210443       1 main.go:227] handling current node
	I0214 00:33:58.216736       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:33:58.216765       1 main.go:227] handling current node
	I0214 00:34:08.220043       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:34:08.220073       1 main.go:227] handling current node
	I0214 00:34:18.231603       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:34:18.231631       1 main.go:227] handling current node
	I0214 00:34:28.242022       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:34:28.242050       1 main.go:227] handling current node
	I0214 00:34:38.253970       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:34:38.254082       1 main.go:227] handling current node
	I0214 00:34:48.260176       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:34:48.260203       1 main.go:227] handling current node
	I0214 00:34:58.271872       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:34:58.271899       1 main.go:227] handling current node
	I0214 00:35:08.325538       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:35:08.325664       1 main.go:227] handling current node
	I0214 00:35:18.335846       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:35:18.335876       1 main.go:227] handling current node
	I0214 00:35:28.340044       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 00:35:28.340071       1 main.go:227] handling current node
	
	
	==> kube-apiserver [b7ec385930f82d0230b6eb45e54e86ae95e79eeb91c4d02f32e6aaa5077bd557] <==
	I0214 00:31:46.413767       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0214 00:31:46.508394       1 cache.go:39] Caches are synced for autoregister controller
	I0214 00:31:46.508882       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0214 00:31:46.512971       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0214 00:31:46.513051       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0214 00:31:47.207532       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0214 00:31:47.207563       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0214 00:31:47.216997       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0214 00:31:47.221780       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0214 00:31:47.221802       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0214 00:31:47.677288       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0214 00:31:47.713546       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0214 00:31:47.777036       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0214 00:31:47.778129       1 controller.go:609] quota admission added evaluator for: endpoints
	I0214 00:31:47.781973       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0214 00:31:48.666806       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0214 00:31:49.227512       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0214 00:31:49.352332       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0214 00:31:52.656193       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0214 00:32:04.712193       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0214 00:32:05.038107       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0214 00:32:22.613616       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0214 00:32:47.390959       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0214 00:35:26.288828       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E0214 00:35:27.931108       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [e6f2cf72e3f1fb4d4a2c16e0f944dbe00d6b3e4d905489f7c60588d9b65ae935] <==
	I0214 00:32:05.124452       1 shared_informer.go:230] Caches are synced for resource quota 
	I0214 00:32:05.149817       1 shared_informer.go:230] Caches are synced for resource quota 
	I0214 00:32:05.156310       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0214 00:32:05.163342       1 shared_informer.go:230] Caches are synced for attach detach 
	I0214 00:32:05.245877       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0214 00:32:05.245974       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E0214 00:32:05.268362       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"eafa1013-0cae-42a8-879f-4129cdfac22a", ResourceVersion:"226", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63843467509, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40014c13e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40014c1400)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40014c1420), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40014c1440), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40014c1460), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40014c1480), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40014c14a0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40014c14e0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001030370), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000b44dd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400074ae70), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40014487a0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000b44e40)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0214 00:32:05.317800       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"3a38d869-9e0a-4972-bc44-3f68b3d9db15", APIVersion:"apps/v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	E0214 00:32:05.327092       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"409fd33e-54a8-402c-8471-c223caaed814", ResourceVersion:"213", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63843467509, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40014c12c0), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x40014c12e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40014c1300), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4000bb1180), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x40014c1320), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40014c1340), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40014c1380)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40010300f0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000b44b38), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400074ad90), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001448798)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000b44b98)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0214 00:32:05.513478       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"c8a0d0b9-cb09-4d2b-b21b-57242c3efb69", APIVersion:"apps/v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-rwr8m
	I0214 00:32:05.567586       1 request.go:621] Throttling request took 1.022955678s, request: GET:https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	I0214 00:32:06.018951       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0214 00:32:06.019003       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0214 00:32:14.965818       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0214 00:32:22.595381       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"eceab646-0efd-4a74-b891-e744da5fddb3", APIVersion:"apps/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0214 00:32:22.614897       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"feb3ba0c-6b50-454e-9425-16c6f394fe44", APIVersion:"apps/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-dxl66
	I0214 00:32:22.651436       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"51e3a3b6-97b1-4ec2-aae0-507f6a139321", APIVersion:"batch/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-69z42
	I0214 00:32:22.722873       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e0dcbf52-e4d3-46bf-89fa-e00f4ae888c2", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-q8l9p
	I0214 00:32:25.921492       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"51e3a3b6-97b1-4ec2-aae0-507f6a139321", APIVersion:"batch/v1", ResourceVersion:"495", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0214 00:32:25.927385       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e0dcbf52-e4d3-46bf-89fa-e00f4ae888c2", APIVersion:"batch/v1", ResourceVersion:"501", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0214 00:35:08.047350       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"c0e497d6-9333-4076-a6b7-0d86b2d56af6", APIVersion:"apps/v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0214 00:35:08.070544       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"20b00f06-739b-41ee-ba7d-f49d76b33a68", APIVersion:"apps/v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-2j6xf
	E0214 00:35:30.942791       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-cs47j" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [f1fe20d8f61789526b384ca263282ff6504a74101316980f90ad3d19397f366f] <==
	W0214 00:32:05.981471       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0214 00:32:05.992779       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0214 00:32:05.992814       1 server_others.go:186] Using iptables Proxier.
	I0214 00:32:05.993092       1 server.go:583] Version: v1.18.20
	I0214 00:32:05.993831       1 config.go:315] Starting service config controller
	I0214 00:32:05.993854       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0214 00:32:05.993883       1 config.go:133] Starting endpoints config controller
	I0214 00:32:05.993891       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0214 00:32:06.094063       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0214 00:32:06.094158       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [a9fd47450ca5dc41b7f55126e444adcb7d81813700445ddb76e58e73a039ff7c] <==
	I0214 00:31:46.424857       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0214 00:31:46.427677       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0214 00:31:46.427844       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 00:31:46.427910       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 00:31:46.427959       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0214 00:31:46.433031       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0214 00:31:46.433071       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0214 00:31:46.433156       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0214 00:31:46.433249       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0214 00:31:46.433297       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0214 00:31:46.436570       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0214 00:31:46.436615       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0214 00:31:46.436695       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0214 00:31:46.436830       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0214 00:31:46.436947       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0214 00:31:46.436974       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 00:31:46.437067       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0214 00:31:47.254564       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0214 00:31:47.312316       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0214 00:31:47.483166       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 00:31:47.483243       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0214 00:31:47.728043       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0214 00:32:04.781661       1 factory.go:503] pod: kube-system/coredns-66bff467f8-rwr8m is already present in the active queue
	E0214 00:32:04.783478       1 factory.go:503] pod: kube-system/coredns-66bff467f8-2xhbc is already present in the active queue
	E0214 00:32:05.948890       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
	
	
	==> kubelet <==
	Feb 14 00:35:12 ingress-addon-legacy-592927 kubelet[1615]: I0214 00:35:12.365231    1615 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 43df5f9dcc201972a8285ddd7c294f2ca7ea02589ecf3e2a9e59f1cdbf22015e
	Feb 14 00:35:12 ingress-addon-legacy-592927 kubelet[1615]: I0214 00:35:12.365497    1615 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0e226dbe1301bf18ef21ad80b308ee5efeb3fab9127a36a667ce441951f76df0
	Feb 14 00:35:12 ingress-addon-legacy-592927 kubelet[1615]: E0214 00:35:12.365761    1615 pod_workers.go:191] Error syncing pod bc79c1cc-2b5d-418f-bb8a-d3eaae0299fe ("hello-world-app-5f5d8b66bb-2j6xf_default(bc79c1cc-2b5d-418f-bb8a-d3eaae0299fe)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-2j6xf_default(bc79c1cc-2b5d-418f-bb8a-d3eaae0299fe)"
	Feb 14 00:35:13 ingress-addon-legacy-592927 kubelet[1615]: I0214 00:35:13.367885    1615 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0e226dbe1301bf18ef21ad80b308ee5efeb3fab9127a36a667ce441951f76df0
	Feb 14 00:35:13 ingress-addon-legacy-592927 kubelet[1615]: E0214 00:35:13.369445    1615 pod_workers.go:191] Error syncing pod bc79c1cc-2b5d-418f-bb8a-d3eaae0299fe ("hello-world-app-5f5d8b66bb-2j6xf_default(bc79c1cc-2b5d-418f-bb8a-d3eaae0299fe)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-2j6xf_default(bc79c1cc-2b5d-418f-bb8a-d3eaae0299fe)"
	Feb 14 00:35:16 ingress-addon-legacy-592927 kubelet[1615]: E0214 00:35:16.675862    1615 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 14 00:35:16 ingress-addon-legacy-592927 kubelet[1615]: E0214 00:35:16.675897    1615 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 14 00:35:16 ingress-addon-legacy-592927 kubelet[1615]: E0214 00:35:16.675943    1615 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 14 00:35:16 ingress-addon-legacy-592927 kubelet[1615]: E0214 00:35:16.675976    1615 pod_workers.go:191] Error syncing pod a29d2441-cc8c-4b28-88fc-8bb6d78d5a6b ("kube-ingress-dns-minikube_kube-system(a29d2441-cc8c-4b28-88fc-8bb6d78d5a6b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Feb 14 00:35:23 ingress-addon-legacy-592927 kubelet[1615]: I0214 00:35:23.978678    1615 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-26m76" (UniqueName: "kubernetes.io/secret/a29d2441-cc8c-4b28-88fc-8bb6d78d5a6b-minikube-ingress-dns-token-26m76") pod "a29d2441-cc8c-4b28-88fc-8bb6d78d5a6b" (UID: "a29d2441-cc8c-4b28-88fc-8bb6d78d5a6b")
	Feb 14 00:35:23 ingress-addon-legacy-592927 kubelet[1615]: I0214 00:35:23.982798    1615 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a29d2441-cc8c-4b28-88fc-8bb6d78d5a6b-minikube-ingress-dns-token-26m76" (OuterVolumeSpecName: "minikube-ingress-dns-token-26m76") pod "a29d2441-cc8c-4b28-88fc-8bb6d78d5a6b" (UID: "a29d2441-cc8c-4b28-88fc-8bb6d78d5a6b"). InnerVolumeSpecName "minikube-ingress-dns-token-26m76". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 14 00:35:24 ingress-addon-legacy-592927 kubelet[1615]: I0214 00:35:24.079091    1615 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-26m76" (UniqueName: "kubernetes.io/secret/a29d2441-cc8c-4b28-88fc-8bb6d78d5a6b-minikube-ingress-dns-token-26m76") on node "ingress-addon-legacy-592927" DevicePath ""
	Feb 14 00:35:26 ingress-addon-legacy-592927 kubelet[1615]: E0214 00:35:26.277586    1615 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dxl66.17b3938f0add0e21", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dxl66", UID:"97868945-a9ac-4970-a866-91154a29ff77", APIVersion:"v1", ResourceVersion:"482", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-592927"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16b2093906a8221, ext:217093494556, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16b2093906a8221, ext:217093494556, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dxl66.17b3938f0add0e21" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 14 00:35:26 ingress-addon-legacy-592927 kubelet[1615]: E0214 00:35:26.306770    1615 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dxl66.17b3938f0add0e21", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dxl66", UID:"97868945-a9ac-4970-a866-91154a29ff77", APIVersion:"v1", ResourceVersion:"482", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-592927"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16b2093906a8221, ext:217093494556, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16b209392024848, ext:217120218426, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dxl66.17b3938f0add0e21" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 14 00:35:28 ingress-addon-legacy-592927 kubelet[1615]: I0214 00:35:28.675039    1615 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0e226dbe1301bf18ef21ad80b308ee5efeb3fab9127a36a667ce441951f76df0
	Feb 14 00:35:29 ingress-addon-legacy-592927 kubelet[1615]: W0214 00:35:29.400924    1615 pod_container_deletor.go:77] Container "27194634af792bec5210a86708c056269c66992f4a8bbab48e6e134caa38b938" not found in pod's containers
	Feb 14 00:35:29 ingress-addon-legacy-592927 kubelet[1615]: I0214 00:35:29.402799    1615 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0e226dbe1301bf18ef21ad80b308ee5efeb3fab9127a36a667ce441951f76df0
	Feb 14 00:35:29 ingress-addon-legacy-592927 kubelet[1615]: I0214 00:35:29.403058    1615 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d71968b5a8320d2548a1eb12d61895f7405d2f13bf63951dbf1efa86b6292c23
	Feb 14 00:35:29 ingress-addon-legacy-592927 kubelet[1615]: E0214 00:35:29.403313    1615 pod_workers.go:191] Error syncing pod bc79c1cc-2b5d-418f-bb8a-d3eaae0299fe ("hello-world-app-5f5d8b66bb-2j6xf_default(bc79c1cc-2b5d-418f-bb8a-d3eaae0299fe)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-2j6xf_default(bc79c1cc-2b5d-418f-bb8a-d3eaae0299fe)"
	Feb 14 00:35:30 ingress-addon-legacy-592927 kubelet[1615]: I0214 00:35:30.393591    1615 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/97868945-a9ac-4970-a866-91154a29ff77-webhook-cert") pod "97868945-a9ac-4970-a866-91154a29ff77" (UID: "97868945-a9ac-4970-a866-91154a29ff77")
	Feb 14 00:35:30 ingress-addon-legacy-592927 kubelet[1615]: I0214 00:35:30.393648    1615 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-6lfbn" (UniqueName: "kubernetes.io/secret/97868945-a9ac-4970-a866-91154a29ff77-ingress-nginx-token-6lfbn") pod "97868945-a9ac-4970-a866-91154a29ff77" (UID: "97868945-a9ac-4970-a866-91154a29ff77")
	Feb 14 00:35:30 ingress-addon-legacy-592927 kubelet[1615]: I0214 00:35:30.399291    1615 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97868945-a9ac-4970-a866-91154a29ff77-ingress-nginx-token-6lfbn" (OuterVolumeSpecName: "ingress-nginx-token-6lfbn") pod "97868945-a9ac-4970-a866-91154a29ff77" (UID: "97868945-a9ac-4970-a866-91154a29ff77"). InnerVolumeSpecName "ingress-nginx-token-6lfbn". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 14 00:35:30 ingress-addon-legacy-592927 kubelet[1615]: I0214 00:35:30.401859    1615 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97868945-a9ac-4970-a866-91154a29ff77-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "97868945-a9ac-4970-a866-91154a29ff77" (UID: "97868945-a9ac-4970-a866-91154a29ff77"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 14 00:35:30 ingress-addon-legacy-592927 kubelet[1615]: I0214 00:35:30.493976    1615 reconciler.go:319] Volume detached for volume "ingress-nginx-token-6lfbn" (UniqueName: "kubernetes.io/secret/97868945-a9ac-4970-a866-91154a29ff77-ingress-nginx-token-6lfbn") on node "ingress-addon-legacy-592927" DevicePath ""
	Feb 14 00:35:30 ingress-addon-legacy-592927 kubelet[1615]: I0214 00:35:30.494017    1615 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/97868945-a9ac-4970-a866-91154a29ff77-webhook-cert") on node "ingress-addon-legacy-592927" DevicePath ""
	
	
	==> storage-provisioner [fb1c6622e5ee6e626753e4a6a9316ba0676e77be919b392f326c41ecf1b31adb] <==
	I0214 00:32:18.208348       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 00:32:18.222590       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 00:32:18.222713       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 00:32:18.230623       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 00:32:18.231036       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-592927_248cf686-b399-476e-9899-6d0d1cb2ccd2!
	I0214 00:32:18.231103       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4e194e8a-668f-4dfa-b79d-79baeebc13db", APIVersion:"v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-592927_248cf686-b399-476e-9899-6d0d1cb2ccd2 became leader
	I0214 00:32:18.331677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-592927_248cf686-b399-476e-9899-6d0d1cb2ccd2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-592927 -n ingress-addon-legacy-592927
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-592927 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (181.20s)
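The kubelet log above shows the ingress-dns image failing with a short-name resolution error because no unqualified-search registries are configured for CRI-O on the node. A quick diagnostic is to check the registries configuration over SSH; this is only a sketch (the registries.conf.d directory and the docker.io value are assumptions, not taken from the log):

	# Sketch: inspect CRI-O's short-name search configuration inside the node.
	out/minikube-linux-arm64 -p ingress-addon-legacy-592927 ssh \
	  "sudo grep -R unqualified-search /etc/containers/registries.conf /etc/containers/registries.conf.d/ 2>/dev/null"
	# If nothing is set, a typical (assumed) entry would be:
	#   unqualified-search-registries = ["docker.io"]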

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (61s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-644788 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0214 01:01:29.720497  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-644788 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (53.695156509s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-644788] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18169
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-644788 in cluster pause-644788
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Updating the running docker "pause-644788" container ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-644788" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
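The assertion at pause_test.go:100 appears to be a plain substring check on the second start's combined output. A minimal way to reproduce that check by hand, assuming the same profile and flags as the invocation above:

	# Sketch: re-run the second start and grep for the message the test expects.
	out/minikube-linux-arm64 start -p pause-644788 --alsologtostderr -v=1 \
	  --driver=docker --container-runtime=crio 2>&1 \
	  | grep "The running cluster does not require reconfiguration"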
** stderr ** 
	I0214 01:01:05.759073  631285 out.go:291] Setting OutFile to fd 1 ...
	I0214 01:01:05.759216  631285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 01:01:05.759223  631285 out.go:304] Setting ErrFile to fd 2...
	I0214 01:01:05.759230  631285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 01:01:05.759485  631285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
	I0214 01:01:05.759850  631285 out.go:298] Setting JSON to false
	I0214 01:01:05.760940  631285 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13409,"bootTime":1707859057,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0214 01:01:05.761014  631285 start.go:138] virtualization:  
	I0214 01:01:05.764063  631285 out.go:177] * [pause-644788] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 01:01:05.766841  631285 out.go:177]   - MINIKUBE_LOCATION=18169
	I0214 01:01:05.768657  631285 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 01:01:05.766937  631285 notify.go:220] Checking for updates...
	I0214 01:01:05.774005  631285 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 01:01:05.776149  631285 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	I0214 01:01:05.779021  631285 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 01:01:05.781021  631285 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 01:01:05.784017  631285 config.go:182] Loaded profile config "pause-644788": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0214 01:01:05.784767  631285 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 01:01:05.811270  631285 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 01:01:05.811423  631285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 01:01:05.905545  631285 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:66 SystemTime:2024-02-14 01:01:05.886904634 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 01:01:05.905645  631285 docker.go:295] overlay module found
	I0214 01:01:05.909548  631285 out.go:177] * Using the docker driver based on existing profile
	I0214 01:01:05.911888  631285 start.go:298] selected driver: docker
	I0214 01:01:05.911903  631285 start.go:902] validating driver "docker" against &{Name:pause-644788 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-644788 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regist
ry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 01:01:05.912132  631285 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 01:01:05.912234  631285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 01:01:05.975221  631285 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:66 SystemTime:2024-02-14 01:01:05.966215359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 01:01:05.975671  631285 cni.go:84] Creating CNI manager for ""
	I0214 01:01:05.975690  631285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 01:01:05.975705  631285 start_flags.go:321] config:
	{Name:pause-644788 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-644788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
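The "docker" driver + "crio" runtime line above is the point where minikube decides to recommend the kindnet CNI for this profile. The snippet below is only an illustrative sketch of a rule of that shape, not minikube's actual code; the function name and the fallback value are invented here.

    package main

    import "fmt"

    // recommendCNI is a hypothetical stand-in for the selection logged by cni.go:
    // the docker driver combined with the crio runtime yields kindnet.
    func recommendCNI(driver, runtime string) string {
        if driver == "docker" && runtime == "crio" {
            return "kindnet"
        }
        return "bridge" // invented fallback, for illustration only
    }

    func main() {
        fmt.Println(recommendCNI("docker", "crio")) // kindnet
    }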
	I0214 01:01:05.978039  631285 out.go:177] * Starting control plane node pause-644788 in cluster pause-644788
	I0214 01:01:05.979950  631285 cache.go:121] Beginning downloading kic base image for docker with crio
	I0214 01:01:05.981953  631285 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0214 01:01:05.983834  631285 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0214 01:01:05.983892  631285 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0214 01:01:05.983905  631285 cache.go:56] Caching tarball of preloaded images
	I0214 01:01:05.983907  631285 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 01:01:05.983985  631285 preload.go:174] Found /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0214 01:01:05.983995  631285 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0214 01:01:05.984128  631285 profile.go:148] Saving config to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/pause-644788/config.json ...
	I0214 01:01:05.999689  631285 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0214 01:01:05.999749  631285 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0214 01:01:05.999772  631285 cache.go:194] Successfully downloaded all kic artifacts
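The image check above short-circuits the pull because the kicbase image is already present in the local Docker daemon. A minimal sketch of that kind of existence check, assuming the docker CLI is on PATH; the tag is copied from the log with the digest omitted.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // imageInDaemon reports whether the local docker daemon already has the image:
    // `docker image inspect` exits non-zero when the reference is not present.
    func imageInDaemon(ref string) bool {
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866"
        if imageInDaemon(ref) {
            fmt.Println("found in local docker daemon, skipping pull")
        } else {
            fmt.Println("not cached locally, would pull")
        }
    }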
	I0214 01:01:05.999822  631285 start.go:365] acquiring machines lock for pause-644788: {Name:mk6e8b525ccd8281aba194ce931c4280f2d853cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 01:01:05.999940  631285 start.go:369] acquired machines lock for "pause-644788" in 80.36µs
	I0214 01:01:05.999967  631285 start.go:96] Skipping create...Using existing machine configuration
	I0214 01:01:05.999976  631285 fix.go:54] fixHost starting: 
	I0214 01:01:06.000277  631285 cli_runner.go:164] Run: docker container inspect pause-644788 --format={{.State.Status}}
	I0214 01:01:06.020169  631285 fix.go:102] recreateIfNeeded on pause-644788: state=Running err=<nil>
	W0214 01:01:06.020202  631285 fix.go:128] unexpected machine state, will restart: <nil>
	I0214 01:01:06.022804  631285 out.go:177] * Updating the running docker "pause-644788" container ...
	I0214 01:01:06.025383  631285 machine.go:88] provisioning docker machine ...
	I0214 01:01:06.025423  631285 ubuntu.go:169] provisioning hostname "pause-644788"
	I0214 01:01:06.025527  631285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-644788
	I0214 01:01:06.045266  631285 main.go:141] libmachine: Using SSH client type: native
	I0214 01:01:06.045779  631285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33587 <nil> <nil>}
	I0214 01:01:06.045823  631285 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-644788 && echo "pause-644788" | sudo tee /etc/hostname
	I0214 01:01:06.189440  631285 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-644788
	
	I0214 01:01:06.189586  631285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-644788
	I0214 01:01:06.207821  631285 main.go:141] libmachine: Using SSH client type: native
	I0214 01:01:06.208245  631285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33587 <nil> <nil>}
	I0214 01:01:06.208269  631285 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-644788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-644788/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-644788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 01:01:06.337833  631285 main.go:141] libmachine: SSH cmd err, output: <nil>: 
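Each of the cli_runner inspect calls above resolves which host port Docker published for the container's 22/tcp (33587 in this run) so the provisioner can open an SSH session against 127.0.0.1. A minimal sketch of the same lookup, reusing the Go template string from the logged command.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same template as the logged `docker container inspect -f ...` call.
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "pause-644788").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh port:", strings.TrimSpace(string(out))) // 33587 in the run above
    }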
	I0214 01:01:06.337862  631285 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18169-498689/.minikube CaCertPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18169-498689/.minikube}
	I0214 01:01:06.337885  631285 ubuntu.go:177] setting up certificates
	I0214 01:01:06.337896  631285 provision.go:83] configureAuth start
	I0214 01:01:06.337954  631285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-644788
	I0214 01:01:06.354385  631285 provision.go:138] copyHostCerts
	I0214 01:01:06.354464  631285 exec_runner.go:144] found /home/jenkins/minikube-integration/18169-498689/.minikube/ca.pem, removing ...
	I0214 01:01:06.354477  631285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18169-498689/.minikube/ca.pem
	I0214 01:01:06.354556  631285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18169-498689/.minikube/ca.pem (1078 bytes)
	I0214 01:01:06.354664  631285 exec_runner.go:144] found /home/jenkins/minikube-integration/18169-498689/.minikube/cert.pem, removing ...
	I0214 01:01:06.354674  631285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18169-498689/.minikube/cert.pem
	I0214 01:01:06.354702  631285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18169-498689/.minikube/cert.pem (1123 bytes)
	I0214 01:01:06.354765  631285 exec_runner.go:144] found /home/jenkins/minikube-integration/18169-498689/.minikube/key.pem, removing ...
	I0214 01:01:06.354774  631285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18169-498689/.minikube/key.pem
	I0214 01:01:06.354797  631285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18169-498689/.minikube/key.pem (1675 bytes)
	I0214 01:01:06.354863  631285 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18169-498689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca-key.pem org=jenkins.pause-644788 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-644788]
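The provision.go line above regenerates the machine's server certificate with the listed SANs (192.168.76.2, 127.0.0.1, localhost, minikube, pause-644788). The sketch below builds a certificate with the same SANs using crypto/x509; it is self-signed for brevity, whereas minikube signs the server cert with its CA key, and the validity period is only assumed to follow the CertExpiration value seen in the config.

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SANs copied from the provision.go line above.
        dnsNames := []string{"localhost", "minikube", "pause-644788"}
        ips := []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")}

        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.pause-644788"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // assumed from CertExpiration above
            DNSNames:     dnsNames,
            IPAddresses:  ips,
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed here (template is its own parent); minikube uses its CA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }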
	I0214 01:01:06.644671  631285 provision.go:172] copyRemoteCerts
	I0214 01:01:06.644744  631285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 01:01:06.644793  631285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-644788
	I0214 01:01:06.661413  631285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33587 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/pause-644788/id_rsa Username:docker}
	I0214 01:01:06.758950  631285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0214 01:01:06.784064  631285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0214 01:01:06.809921  631285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0214 01:01:06.835676  631285 provision.go:86] duration metric: configureAuth took 497.766825ms
	I0214 01:01:06.835702  631285 ubuntu.go:193] setting minikube options for container-runtime
	I0214 01:01:06.835948  631285 config.go:182] Loaded profile config "pause-644788": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0214 01:01:06.836063  631285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-644788
	I0214 01:01:06.860742  631285 main.go:141] libmachine: Using SSH client type: native
	I0214 01:01:06.861176  631285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33587 <nil> <nil>}
	I0214 01:01:06.861197  631285 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 01:01:12.285420  631285 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 01:01:12.285451  631285 machine.go:91] provisioned docker machine in 6.260034134s
	I0214 01:01:12.285463  631285 start.go:300] post-start starting for "pause-644788" (driver="docker")
	I0214 01:01:12.285480  631285 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 01:01:12.285580  631285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 01:01:12.285651  631285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-644788
	I0214 01:01:12.319933  631285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33587 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/pause-644788/id_rsa Username:docker}
	I0214 01:01:12.424404  631285 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 01:01:12.428263  631285 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0214 01:01:12.428303  631285 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0214 01:01:12.428315  631285 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0214 01:01:12.428326  631285 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0214 01:01:12.428336  631285 filesync.go:126] Scanning /home/jenkins/minikube-integration/18169-498689/.minikube/addons for local assets ...
	I0214 01:01:12.428392  631285 filesync.go:126] Scanning /home/jenkins/minikube-integration/18169-498689/.minikube/files for local assets ...
	I0214 01:01:12.428473  631285 filesync.go:149] local asset: /home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/5040612.pem -> 5040612.pem in /etc/ssl/certs
	I0214 01:01:12.428584  631285 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 01:01:12.438174  631285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/5040612.pem --> /etc/ssl/certs/5040612.pem (1708 bytes)
	I0214 01:01:12.465242  631285 start.go:303] post-start completed in 179.763593ms
	I0214 01:01:12.465325  631285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 01:01:12.465368  631285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-644788
	I0214 01:01:12.485479  631285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33587 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/pause-644788/id_rsa Username:docker}
	I0214 01:01:12.587457  631285 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0214 01:01:12.592525  631285 fix.go:56] fixHost completed within 6.592542548s
	I0214 01:01:12.592547  631285 start.go:83] releasing machines lock for "pause-644788", held for 6.592594339s
	I0214 01:01:12.592650  631285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-644788
	I0214 01:01:12.610618  631285 ssh_runner.go:195] Run: cat /version.json
	I0214 01:01:12.610668  631285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-644788
	I0214 01:01:12.610955  631285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 01:01:12.611015  631285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-644788
	I0214 01:01:12.650684  631285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33587 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/pause-644788/id_rsa Username:docker}
	I0214 01:01:12.659533  631285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33587 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/pause-644788/id_rsa Username:docker}
	I0214 01:01:12.765543  631285 ssh_runner.go:195] Run: systemctl --version
	I0214 01:01:13.039065  631285 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 01:01:13.391425  631285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0214 01:01:13.413266  631285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 01:01:13.443247  631285 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0214 01:01:13.443347  631285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 01:01:13.460332  631285 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0214 01:01:13.460361  631285 start.go:475] detecting cgroup driver to use...
	I0214 01:01:13.460394  631285 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0214 01:01:13.460477  631285 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 01:01:13.498252  631285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 01:01:13.529873  631285 docker.go:217] disabling cri-docker service (if available) ...
	I0214 01:01:13.529943  631285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 01:01:13.559522  631285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 01:01:13.592779  631285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 01:01:13.894666  631285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 01:01:14.201881  631285 docker.go:233] disabling docker service ...
	I0214 01:01:14.201971  631285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 01:01:14.221481  631285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 01:01:14.276361  631285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 01:01:14.560381  631285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 01:01:14.844132  631285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 01:01:14.875731  631285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 01:01:14.932719  631285 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0214 01:01:14.932822  631285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 01:01:14.961960  631285 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 01:01:14.962034  631285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 01:01:14.986397  631285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 01:01:15.013221  631285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
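The sed invocations above pin the pause image to registry.k8s.io/pause:3.9, force cgroup_manager to cgroupfs, and reinsert conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf. Below is a sketch of the first two rewrites in Go; the starting values in the string are hypothetical, only the substitutions mirror the logged commands.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Hypothetical original config lines; CRI-O's shipped defaults may differ.
        conf := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }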
	I0214 01:01:15.041364  631285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 01:01:15.086198  631285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 01:01:15.110641  631285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 01:01:15.138672  631285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 01:01:15.388969  631285 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 01:01:24.232247  631285 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.84324465s)
	I0214 01:01:24.232270  631285 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 01:01:24.232319  631285 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 01:01:24.238217  631285 start.go:543] Will wait 60s for crictl version
	I0214 01:01:24.238283  631285 ssh_runner.go:195] Run: which crictl
	I0214 01:01:24.243310  631285 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 01:01:24.330473  631285 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0214 01:01:24.330561  631285 ssh_runner.go:195] Run: crio --version
	I0214 01:01:24.410815  631285 ssh_runner.go:195] Run: crio --version
	I0214 01:01:24.496231  631285 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0214 01:01:24.498477  631285 cli_runner.go:164] Run: docker network inspect pause-644788 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 01:01:24.540934  631285 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0214 01:01:24.545221  631285 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0214 01:01:24.545291  631285 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 01:01:24.634808  631285 crio.go:496] all images are preloaded for cri-o runtime.
	I0214 01:01:24.634833  631285 crio.go:415] Images already preloaded, skipping extraction
	I0214 01:01:24.634906  631285 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 01:01:24.705024  631285 crio.go:496] all images are preloaded for cri-o runtime.
	I0214 01:01:24.705046  631285 cache_images.go:84] Images are preloaded, skipping loading
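The "all images are preloaded" conclusion above comes from listing the CRI-O image store with `sudo crictl images --output json` and checking it against the expected image set. A minimal sketch of reading that output; the JSON field names follow crictl's documented output shape, which is assumed here rather than shown in the log.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList mirrors only the part of `crictl images --output json` used here.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        fmt.Printf("%d images present in the CRI-O image store\n", len(list.Images))
    }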
	I0214 01:01:24.705121  631285 ssh_runner.go:195] Run: crio config
	I0214 01:01:24.790595  631285 cni.go:84] Creating CNI manager for ""
	I0214 01:01:24.790618  631285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 01:01:24.790637  631285 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0214 01:01:24.790656  631285 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-644788 NodeName:pause-644788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 01:01:24.790812  631285 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-644788"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 01:01:24.790911  631285 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-644788 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-644788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0214 01:01:24.790981  631285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0214 01:01:24.799982  631285 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 01:01:24.800062  631285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 01:01:24.808403  631285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0214 01:01:24.826495  631285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 01:01:24.843823  631285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0214 01:01:24.861179  631285 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0214 01:01:24.864967  631285 certs.go:56] Setting up /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/pause-644788 for IP: 192.168.76.2
	I0214 01:01:24.865007  631285 certs.go:190] acquiring lock for shared ca certs: {Name:mk24bda5a01a6d67ca318fbbda66875cef4a1a9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 01:01:24.865134  631285 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18169-498689/.minikube/ca.key
	I0214 01:01:24.865189  631285 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.key
	I0214 01:01:24.865278  631285 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/pause-644788/client.key
	I0214 01:01:24.865348  631285 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/pause-644788/apiserver.key.31bdca25
	I0214 01:01:24.865396  631285 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/pause-644788/proxy-client.key
	I0214 01:01:24.865513  631285 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/504061.pem (1338 bytes)
	W0214 01:01:24.865549  631285 certs.go:433] ignoring /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/504061_empty.pem, impossibly tiny 0 bytes
	I0214 01:01:24.865568  631285 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca-key.pem (1679 bytes)
	I0214 01:01:24.865600  631285 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem (1078 bytes)
	I0214 01:01:24.865628  631285 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/cert.pem (1123 bytes)
	I0214 01:01:24.865659  631285 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/key.pem (1675 bytes)
	I0214 01:01:24.865713  631285 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/5040612.pem (1708 bytes)
	I0214 01:01:24.866357  631285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/pause-644788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0214 01:01:24.899991  631285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/pause-644788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0214 01:01:24.938400  631285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/pause-644788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 01:01:24.979388  631285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/pause-644788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 01:01:25.022025  631285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 01:01:25.065984  631285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0214 01:01:25.108423  631285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 01:01:25.148040  631285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0214 01:01:25.187125  631285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/certs/504061.pem --> /usr/share/ca-certificates/504061.pem (1338 bytes)
	I0214 01:01:25.230883  631285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/5040612.pem --> /usr/share/ca-certificates/5040612.pem (1708 bytes)
	I0214 01:01:25.278997  631285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 01:01:25.310912  631285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 01:01:25.334284  631285 ssh_runner.go:195] Run: openssl version
	I0214 01:01:25.340111  631285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/504061.pem && ln -fs /usr/share/ca-certificates/504061.pem /etc/ssl/certs/504061.pem"
	I0214 01:01:25.354976  631285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/504061.pem
	I0214 01:01:25.360513  631285 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 00:26 /usr/share/ca-certificates/504061.pem
	I0214 01:01:25.360588  631285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/504061.pem
	I0214 01:01:25.371256  631285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/504061.pem /etc/ssl/certs/51391683.0"
	I0214 01:01:25.383794  631285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040612.pem && ln -fs /usr/share/ca-certificates/5040612.pem /etc/ssl/certs/5040612.pem"
	I0214 01:01:25.396194  631285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5040612.pem
	I0214 01:01:25.400113  631285 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 00:26 /usr/share/ca-certificates/5040612.pem
	I0214 01:01:25.400179  631285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040612.pem
	I0214 01:01:25.408427  631285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5040612.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 01:01:25.422750  631285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 01:01:25.432633  631285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 01:01:25.436512  631285 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 00:19 /usr/share/ca-certificates/minikubeCA.pem
	I0214 01:01:25.436585  631285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 01:01:25.447923  631285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
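The ls/openssl/ln sequence above is how each CA certificate gets its /etc/ssl/certs/<subject-hash>.0 symlink (b5213941.0 for minikubeCA.pem in this run), which is the lookup scheme OpenSSL uses for its trust directory; the `-checkend 86400` probes that follow simply verify each cluster certificate remains valid for at least another 24 hours. A sketch of deriving the same link name, assuming openssl is on PATH.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        // `openssl x509 -hash -noout` prints the subject hash used as the link name.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // b5213941 in the run above
        fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pem, hash)
    }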
	I0214 01:01:25.463181  631285 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0214 01:01:25.466893  631285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0214 01:01:25.475852  631285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0214 01:01:25.486456  631285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0214 01:01:25.496656  631285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0214 01:01:25.507380  631285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0214 01:01:25.518568  631285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0214 01:01:25.529582  631285 kubeadm.go:404] StartCluster: {Name:pause-644788 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-644788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 01:01:25.529706  631285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 01:01:25.529781  631285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 01:01:25.589806  631285 cri.go:89] found id: "fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7"
	I0214 01:01:25.589833  631285 cri.go:89] found id: "1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556"
	I0214 01:01:25.589839  631285 cri.go:89] found id: "9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc"
	I0214 01:01:25.589844  631285 cri.go:89] found id: "6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257"
	I0214 01:01:25.589848  631285 cri.go:89] found id: "827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6"
	I0214 01:01:25.589853  631285 cri.go:89] found id: "b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314"
	I0214 01:01:25.589857  631285 cri.go:89] found id: "378b9e0fb60b37985f1e3af2fe6b389e6349e3a17e3466d010eff322ea2a5d15"
	I0214 01:01:25.589861  631285 cri.go:89] found id: "090931a4d720e9f5af76785cb50e723699a5515967278e2889721ff0e4b3d96d"
	I0214 01:01:25.589866  631285 cri.go:89] found id: "a5c07c1af086efc1699569ac480ad9a53352042db016a74db405a9ebdcd9ce95"
	I0214 01:01:25.589873  631285 cri.go:89] found id: "23586c9364439c469cd850ca3466e08e9f53ed48c3c24d45a852824fdabb3c4d"
	I0214 01:01:25.589879  631285 cri.go:89] found id: "8b9bc52559cbf14d19f7da83d1cb4837614b5904b896b091f82c459988dcd82e"
	I0214 01:01:25.589887  631285 cri.go:89] found id: "61c66c21d744ad1eb60ed7f7766f6a9801eac623e67e0049153f6d2eb45012ba"
	I0214 01:01:25.589896  631285 cri.go:89] found id: "aa8f37be653fa5260cc7463da7529cb866e0ab511c6be3899b775f01798b571e"
	I0214 01:01:25.589902  631285 cri.go:89] found id: "a7101836fc16c98778a0190b45cb702f19b3b5c395ecd6efdcea9736cdcb0acd"
	I0214 01:01:25.589906  631285 cri.go:89] found id: ""
	I0214 01:01:25.589958  631285 ssh_runner.go:195] Run: sudo runc list -f json
	I0214 01:01:25.662680  631285 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"090931a4d720e9f5af76785cb50e723699a5515967278e2889721ff0e4b3d96d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/090931a4d720e9f5af76785cb50e723699a5515967278e2889721ff0e4b3d96d/userdata","rootfs":"/var/lib/containers/storage/overlay/4879c835098f7dab67b3def1e431175ebbaec710b8879fbe9589a4752be075d0/merged","created":"2024-02-14T01:01:03.414754261Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"82945af1","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-
o.Annotations":"{\"io.kubernetes.container.hash\":\"82945af1\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"090931a4d720e9f5af76785cb50e723699a5515967278e2889721ff0e4b3d96d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:01:03.384080655Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kube
rnetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-blr8m\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"79232acc-f48d-4b46-8c04-17e044441e02\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-blr8m_79232acc-f48d-4b46-8c04-17e044441e02/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4879c835098f7dab67b3def1e431175ebbaec710b8879fbe9589a4752be075d0/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-blr8m_kube-system_79232acc-f48d-4b46-8c04-17e044441e02_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b9c06cd1598992ebca09f7225fc3aca16471488793bbee8edc87dc60b0ce23fe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b9c06cd1598992ebca09f7225fc3aca1647148
8793bbee8edc87dc60b0ce23fe","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-blr8m_kube-system_79232acc-f48d-4b46-8c04-17e044441e02_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/79232acc-f48d-4b46-8c04-17e044441e02/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/79232acc-f48d-4b46-8c04-17e044441e02/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/79232acc-f48d-4b46-8c04-17e044441e02/containers/coredns/a2884c58\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\
"host_path\":\"/var/lib/kubelet/pods/79232acc-f48d-4b46-8c04-17e044441e02/volumes/kubernetes.io~projected/kube-api-access-q7f8g\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-blr8m","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"79232acc-f48d-4b46-8c04-17e044441e02","kubernetes.io/config.seen":"2024-02-14T01:01:03.000499019Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556/userdata","rootfs":"/var/lib/containers/storage/overlay/f81923263a6636a4bff727eb978319a921067b6353d23db879584531c0d44787/merged","created":"2024-02-14T01:01:13.297682002Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e1639c7a","io.k
ubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e1639c7a\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:01:13.134158628Z","io.kubernetes.cri-o.Image":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.4","io.kubernetes.cri-o.ImageRef":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","io.kuber
netes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-644788\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"619859cb6c9bff0d1fcc56f5d5fafe66\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-644788_619859cb6c9bff0d1fcc56f5d5fafe66/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f81923263a6636a4bff727eb978319a921067b6353d23db879584531c0d44787/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-644788_kube-system_619859cb6c9bff0d1fcc56f5d5fafe66_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/501097f7d088def7ea48ecd728f1f48ff07ca660447ad103adce3ce833444417/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"501097f7d088def7ea48ecd728f1f48ff07ca660447ad103adce3ce833444417","io.kubernetes.cri-o.SandboxName
":"k8s_kube-scheduler-pause-644788_kube-system_619859cb6c9bff0d1fcc56f5d5fafe66_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/619859cb6c9bff0d1fcc56f5d5fafe66/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/619859cb6c9bff0d1fcc56f5d5fafe66/containers/kube-scheduler/12d5a193\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-644788","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"619859cb6c9bff0
d1fcc56f5d5fafe66","kubernetes.io/config.hash":"619859cb6c9bff0d1fcc56f5d5fafe66","kubernetes.io/config.seen":"2024-02-14T01:00:10.544691205Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"23586c9364439c469cd850ca3466e08e9f53ed48c3c24d45a852824fdabb3c4d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/23586c9364439c469cd850ca3466e08e9f53ed48c3c24d45a852824fdabb3c4d/userdata","rootfs":"/var/lib/containers/storage/overlay/6a18f6c9aeaf5beb3b24b8832d0b2d0b7112a5f9381b08588cd8448a4dcd2a42/merged","created":"2024-02-14T01:00:32.521496019Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"47102984","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"47102984\",\"io.kubernetes.containe
r.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"23586c9364439c469cd850ca3466e08e9f53ed48c3c24d45a852824fdabb3c4d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:00:32.466519078Z","io.kubernetes.cri-o.Image":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-nxl78\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2cd1ad76-088c-4810-9812-5fa72cc11eab\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-nxl78_2cd1ad76-08
8c-4810-9812-5fa72cc11eab/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6a18f6c9aeaf5beb3b24b8832d0b2d0b7112a5f9381b08588cd8448a4dcd2a42/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-nxl78_kube-system_2cd1ad76-088c-4810-9812-5fa72cc11eab_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/77eeda6873546623eea6b4ee1ac10115368a58b8d2e1d8374c920b394e9bf798/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"77eeda6873546623eea6b4ee1ac10115368a58b8d2e1d8374c920b394e9bf798","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-nxl78_kube-system_2cd1ad76-088c-4810-9812-5fa72cc11eab_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\"
:0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2cd1ad76-088c-4810-9812-5fa72cc11eab/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2cd1ad76-088c-4810-9812-5fa72cc11eab/containers/kindnet-cni/8adad189\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/2cd1ad76-088c-4810-9812-5fa72cc11eab/volumes/kubernetes.io~projected/kube-api-access-6w7k5\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-nxl78","io.kubernetes.p
od.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2cd1ad76-088c-4810-9812-5fa72cc11eab","kubernetes.io/config.seen":"2024-02-14T01:00:32.106294914Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"378b9e0fb60b37985f1e3af2fe6b389e6349e3a17e3466d010eff322ea2a5d15","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/378b9e0fb60b37985f1e3af2fe6b389e6349e3a17e3466d010eff322ea2a5d15/userdata","rootfs":"/var/lib/containers/storage/overlay/f809b035da521f9060fc2161581eb779ac91aaa8f4de8081f1cc2d65db1f4acd/merged","created":"2024-02-14T01:01:13.093331802Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9a968e67","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.conta
iner.hash\":\"9a968e67\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"378b9e0fb60b37985f1e3af2fe6b389e6349e3a17e3466d010eff322ea2a5d15","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:01:12.909694874Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-644788\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8b240531e6f3c5666f0f30130069f63b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_
etcd-pause-644788_8b240531e6f3c5666f0f30130069f63b/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f809b035da521f9060fc2161581eb779ac91aaa8f4de8081f1cc2d65db1f4acd/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-644788_kube-system_8b240531e6f3c5666f0f30130069f63b_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/706bddbd852f853ffe7d7e45d92b902abe6ebe51d8d25ccf07eaedacd7bdfd39/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"706bddbd852f853ffe7d7e45d92b902abe6ebe51d8d25ccf07eaedacd7bdfd39","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-644788_kube-system_8b240531e6f3c5666f0f30130069f63b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8b240531e6f3c5666f0f3
0130069f63b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8b240531e6f3c5666f0f30130069f63b/containers/etcd/7c44a323\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-644788","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8b240531e6f3c5666f0f30130069f63b","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.76.2:2379","kubernetes.io/config.hash":"8b240531e6f3c5666f0f30130069f63b","kubernetes.io/config.seen":"2024-02-14T01:00:10.544681646Z","kubernetes.
io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"61c66c21d744ad1eb60ed7f7766f6a9801eac623e67e0049153f6d2eb45012ba","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/61c66c21d744ad1eb60ed7f7766f6a9801eac623e67e0049153f6d2eb45012ba/userdata","rootfs":"/var/lib/containers/storage/overlay/169bf5f0cc424367c188c865820ca822677bdd98abc33b138a973e374cf1a0b4/merged","created":"2024-02-14T01:00:11.31141287Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e1639c7a","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e1639c7a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\
"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"61c66c21d744ad1eb60ed7f7766f6a9801eac623e67e0049153f6d2eb45012ba","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:00:11.223016036Z","io.kubernetes.cri-o.Image":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.4","io.kubernetes.cri-o.ImageRef":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-644788\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"619859cb6c9bff0d1fcc56f5d5fafe66\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-644788_619859cb6c9bff0d1fcc56f5d5fafe66/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.Mount
Point":"/var/lib/containers/storage/overlay/169bf5f0cc424367c188c865820ca822677bdd98abc33b138a973e374cf1a0b4/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-644788_kube-system_619859cb6c9bff0d1fcc56f5d5fafe66_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/501097f7d088def7ea48ecd728f1f48ff07ca660447ad103adce3ce833444417/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"501097f7d088def7ea48ecd728f1f48ff07ca660447ad103adce3ce833444417","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-644788_kube-system_619859cb6c9bff0d1fcc56f5d5fafe66_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/619859cb6c9bff0d1fcc56f5d5fafe66/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination
-log\",\"host_path\":\"/var/lib/kubelet/pods/619859cb6c9bff0d1fcc56f5d5fafe66/containers/kube-scheduler/2d80bcc7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-644788","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"619859cb6c9bff0d1fcc56f5d5fafe66","kubernetes.io/config.hash":"619859cb6c9bff0d1fcc56f5d5fafe66","kubernetes.io/config.seen":"2024-02-14T01:00:10.544691205Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257/userdata","rootfs":"/var/lib/containers/storage/o
verlay/fda1148567e0178514ec2f2814435c8527aca1380c2444f9d288e78f213fb9c3/merged","created":"2024-02-14T01:01:13.779120322Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"14d1717c","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"14d1717c\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:01:13.090633236Z","io.kubernetes.cri-o.Image":"3ca3ca488cf13fde14cfc4b3ffde0c53a8
c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.4","io.kubernetes.cri-o.ImageRef":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-bnbc8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c162e76e-4f54-45bb-908d-b3e05565dcad\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-bnbc8_c162e76e-4f54-45bb-908d-b3e05565dcad/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fda1148567e0178514ec2f2814435c8527aca1380c2444f9d288e78f213fb9c3/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-bnbc8_kube-system_c162e76e-4f54-45bb-908d-b3e05565dcad_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3b25c68a7bf1193bcecf8715eb16c0f700a1cbb92a5fb2b
8af867aba2ebae310/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3b25c68a7bf1193bcecf8715eb16c0f700a1cbb92a5fb2b8af867aba2ebae310","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-bnbc8_kube-system_c162e76e-4f54-45bb-908d-b3e05565dcad_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c162e76e-4f54-45bb-908d-b3e05565dcad/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c162e76e-4f54-45bb-908d-b3e05565dcad/containers/k
ube-proxy/5a46a337\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/c162e76e-4f54-45bb-908d-b3e05565dcad/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/c162e76e-4f54-45bb-908d-b3e05565dcad/volumes/kubernetes.io~projected/kube-api-access-phvgl\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-bnbc8","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c162e76e-4f54-45bb-908d-b3e05565dcad","kubernetes.io/config.seen":"2024-02-14T01:00:32.132076499Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6","pid":0,"status":"stopped","bundle"
:"/run/containers/storage/overlay-containers/827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6/userdata","rootfs":"/var/lib/containers/storage/overlay/330fbf7976cc9ddc7dbf9b0955f8427841c9faae014dabd33fecc5850c0c61e6/merged","created":"2024-02-14T01:01:13.288238767Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"47102984","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"47102984\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6","io.kubernetes
.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:01:12.986548401Z","io.kubernetes.cri-o.Image":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-nxl78\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2cd1ad76-088c-4810-9812-5fa72cc11eab\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-nxl78_2cd1ad76-088c-4810-9812-5fa72cc11eab/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/330fbf7976cc9ddc7dbf9b0955f8427841c9faae014dabd33fecc5850c0c61e6/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-nxl78_kube-syste
m_2cd1ad76-088c-4810-9812-5fa72cc11eab_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/77eeda6873546623eea6b4ee1ac10115368a58b8d2e1d8374c920b394e9bf798/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"77eeda6873546623eea6b4ee1ac10115368a58b8d2e1d8374c920b394e9bf798","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-nxl78_kube-system_2cd1ad76-088c-4810-9812-5fa72cc11eab_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2cd1ad76-088c-4810-9812-5fa72cc11eab/etc-hosts\",\"readonly\":false,\"propagation\
":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2cd1ad76-088c-4810-9812-5fa72cc11eab/containers/kindnet-cni/763a0d7a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/2cd1ad76-088c-4810-9812-5fa72cc11eab/volumes/kubernetes.io~projected/kube-api-access-6w7k5\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-nxl78","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2cd1ad76-088c-4810-9812-5fa72cc11eab","kubernetes.io/config.seen":"2024-02-14T01:00:32.106294914Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8b9bc52559cbf14d19f7da83d1cb483
7614b5904b896b091f82c459988dcd82e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8b9bc52559cbf14d19f7da83d1cb4837614b5904b896b091f82c459988dcd82e/userdata","rootfs":"/var/lib/containers/storage/overlay/b41a932d86a18f2768a788aaa36876d0fbdc92676cf3f81e360c487c3996b80b/merged","created":"2024-02-14T01:00:11.303203547Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c95a9554","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c95a9554\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8b9bc52
559cbf14d19f7da83d1cb4837614b5904b896b091f82c459988dcd82e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:00:11.248301978Z","io.kubernetes.cri-o.Image":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri-o.ImageRef":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-644788\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"59aa100ea01b9fd446f1725adf923d20\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-644788_59aa100ea01b9fd446f1725adf923d20/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b41a932d86a18f2768a788aaa36876d0fbdc92676cf3f81e360c487c3996
b80b/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-644788_kube-system_59aa100ea01b9fd446f1725adf923d20_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7af8c2f321950c7bbda8e47955dcf5baa7ef71236794b9f635b4fe942c760ed8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7af8c2f321950c7bbda8e47955dcf5baa7ef71236794b9f635b4fe942c760ed8","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-644788_kube-system_59aa100ea01b9fd446f1725adf923d20_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/59aa100ea01b9fd446f1725adf923d20/containers/kube-apiserver/1c13f9db\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"pro
pagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/59aa100ea01b9fd446f1725adf923d20/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-644788","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"59aa100ea0
1b9fd446f1725adf923d20","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.76.2:8443","kubernetes.io/config.hash":"59aa100ea01b9fd446f1725adf923d20","kubernetes.io/config.seen":"2024-02-14T01:00:10.544688185Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc/userdata","rootfs":"/var/lib/containers/storage/overlay/296aa5ff985c7125cf038c67c4efd381b8a1b0028b017dbf4c64b0b8127fd222/merged","created":"2024-02-14T01:01:13.53470572Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c95a9554","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernete
s.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c95a9554\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:01:13.127460171Z","io.kubernetes.cri-o.Image":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri-o.ImageRef":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-644788\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"59aa100ea01b9fd446f1
725adf923d20\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-644788_59aa100ea01b9fd446f1725adf923d20/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/296aa5ff985c7125cf038c67c4efd381b8a1b0028b017dbf4c64b0b8127fd222/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-644788_kube-system_59aa100ea01b9fd446f1725adf923d20_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7af8c2f321950c7bbda8e47955dcf5baa7ef71236794b9f635b4fe942c760ed8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7af8c2f321950c7bbda8e47955dcf5baa7ef71236794b9f635b4fe942c760ed8","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-644788_kube-system_59aa100ea01b9fd446f1725adf923d20_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":
"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/59aa100ea01b9fd446f1725adf923d20/containers/kube-apiserver/004af473\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/59aa100ea01b9fd446f1725adf923d20/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\
":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-644788","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"59aa100ea01b9fd446f1725adf923d20","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.76.2:8443","kubernetes.io/config.hash":"59aa100ea01b9fd446f1725adf923d20","kubernetes.io/config.seen":"2024-02-14T01:00:10.544688185Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a5c07c1af086efc1699569ac480ad9a53352042db016a74db405a9ebdcd9ce95","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a5c07c1af086efc1699569ac480ad9a53352042db016a74db405a9ebdcd9ce95/userdata","rootfs":"/var/lib/containers/storage/overlay/06b2ad1508946ac8cbd8c393e33c2d8d9c242297502c71db885f1d8ac134ade7/merged",
"created":"2024-02-14T01:00:34.012173257Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"14d1717c","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"14d1717c\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a5c07c1af086efc1699569ac480ad9a53352042db016a74db405a9ebdcd9ce95","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:00:33.977716095Z","io.kubernetes.cri-o.Image":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.ImageName":"registry.k8s.io
/kube-proxy:v1.28.4","io.kubernetes.cri-o.ImageRef":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-bnbc8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c162e76e-4f54-45bb-908d-b3e05565dcad\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-bnbc8_c162e76e-4f54-45bb-908d-b3e05565dcad/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/06b2ad1508946ac8cbd8c393e33c2d8d9c242297502c71db885f1d8ac134ade7/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-bnbc8_kube-system_c162e76e-4f54-45bb-908d-b3e05565dcad_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3b25c68a7bf1193bcecf8715eb16c0f700a1cbb92a5fb2b8af867aba2ebae310/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3b25c68a7bf1193bcecf8
715eb16c0f700a1cbb92a5fb2b8af867aba2ebae310","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-bnbc8_kube-system_c162e76e-4f54-45bb-908d-b3e05565dcad_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c162e76e-4f54-45bb-908d-b3e05565dcad/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c162e76e-4f54-45bb-908d-b3e05565dcad/containers/kube-proxy/9dfc19b4\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"contai
ner_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/c162e76e-4f54-45bb-908d-b3e05565dcad/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/c162e76e-4f54-45bb-908d-b3e05565dcad/volumes/kubernetes.io~projected/kube-api-access-phvgl\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-bnbc8","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c162e76e-4f54-45bb-908d-b3e05565dcad","kubernetes.io/config.seen":"2024-02-14T01:00:32.132076499Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a7101836fc16c98778a0190b45cb702f19b3b5c395ecd6efdcea9736cdcb0acd","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a7101836fc16c98778a0190b45cb702f19b3b5c395ecd6efd
cea9736cdcb0acd/userdata","rootfs":"/var/lib/containers/storage/overlay/ca6f69d1d06c7123663de92a798f40411859e51b902d2f4ea8e2c8883ea8e588/merged","created":"2024-02-14T01:00:11.201262723Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9a968e67","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9a968e67\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a7101836fc16c98778a0190b45cb702f19b3b5c395ecd6efdcea9736cdcb0acd","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:00:11.130222711Z","io.
kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-644788\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8b240531e6f3c5666f0f30130069f63b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-644788_8b240531e6f3c5666f0f30130069f63b/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ca6f69d1d06c7123663de92a798f40411859e51b902d2f4ea8e2c8883ea8e588/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-644788_kube-system_8b240531e6f3c5666f0f30130069f63b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/706bddbd852f853ffe7d7e45d92b902abe6ebe51d
8d25ccf07eaedacd7bdfd39/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"706bddbd852f853ffe7d7e45d92b902abe6ebe51d8d25ccf07eaedacd7bdfd39","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-644788_kube-system_8b240531e6f3c5666f0f30130069f63b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8b240531e6f3c5666f0f30130069f63b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8b240531e6f3c5666f0f30130069f63b/containers/etcd/c97ce5d3\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/e
tcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-644788","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8b240531e6f3c5666f0f30130069f63b","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.76.2:2379","kubernetes.io/config.hash":"8b240531e6f3c5666f0f30130069f63b","kubernetes.io/config.seen":"2024-02-14T01:00:10.544681646Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aa8f37be653fa5260cc7463da7529cb866e0ab511c6be3899b775f01798b571e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/aa8f37be653fa5260cc7463da7529cb866e0ab511c6be3899b775f01798b571e/userdata","rootfs":"/var/lib/containers/storage/overlay/e92b4e4be00a3867bbc6a8829644176792951867b94b9dd25e47a488954d4f2f/merged","created":"2024-02-14T01:00:11.265103128Z","annotations":{"io.container.manager"
:"cri-o","io.kubernetes.container.hash":"b60ddd3e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b60ddd3e\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"aa8f37be653fa5260cc7463da7529cb866e0ab511c6be3899b775f01798b571e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:00:11.184944909Z","io.kubernetes.cri-o.Image":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.4","io.kubernetes.cri-o.ImageRef":"9
961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-644788\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c67ae50337e937be552b2a1bf295567b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-644788_c67ae50337e937be552b2a1bf295567b/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e92b4e4be00a3867bbc6a8829644176792951867b94b9dd25e47a488954d4f2f/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-644788_kube-system_c67ae50337e937be552b2a1bf295567b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e28b841896c87b742d465e0c38842422b94e4342935f77c080872fda91c81d33/userdata/resolv.conf","io.kube
rnetes.cri-o.SandboxID":"e28b841896c87b742d465e0c38842422b94e4342935f77c080872fda91c81d33","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-644788_kube-system_c67ae50337e937be552b2a1bf295567b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c67ae50337e937be552b2a1bf295567b/containers/kube-controller-manager/e678862e\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c67ae50337e937be552b2a1bf295567b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/cert
s\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-644788","io.kubernete
s.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c67ae50337e937be552b2a1bf295567b","kubernetes.io/config.hash":"c67ae50337e937be552b2a1bf295567b","kubernetes.io/config.seen":"2024-02-14T01:00:10.544689695Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314/userdata","rootfs":"/var/lib/containers/storage/overlay/0765bd5dcdc8815926273f0dc47020c622018728cde3457a7ac894a72e5607f7/merged","created":"2024-02-14T01:01:13.094109906Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b60ddd3e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminat
ionMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b60ddd3e\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:01:12.916419497Z","io.kubernetes.cri-o.Image":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.4","io.kubernetes.cri-o.ImageRef":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-644788\",\"io.kubernetes.pod.namespace\"
:\"kube-system\",\"io.kubernetes.pod.uid\":\"c67ae50337e937be552b2a1bf295567b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-644788_c67ae50337e937be552b2a1bf295567b/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0765bd5dcdc8815926273f0dc47020c622018728cde3457a7ac894a72e5607f7/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-644788_kube-system_c67ae50337e937be552b2a1bf295567b_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e28b841896c87b742d465e0c38842422b94e4342935f77c080872fda91c81d33/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e28b841896c87b742d465e0c38842422b94e4342935f77c080872fda91c81d33","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-644788_kube-system_c67ae50337e937be552b2a1bf295567b_0","io.kubernetes.cri-o.Seccom
pProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c67ae50337e937be552b2a1bf295567b/containers/kube-controller-manager/da4a4b2f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c67ae50337e937be552b2a1bf295567b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel
\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-644788","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c67ae50337e937be552b2a1bf295567b","kubernetes.io/config.hash":"c67ae50337e937be552b2a1bf295567b","kubernetes.io/config.seen":"2024-02-14T0
1:00:10.544689695Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7/userdata","rootfs":"/var/lib/containers/storage/overlay/0fa2dec7415a3081e5a48eb643a5f386241ccae86871f61764c323b9a9a73905/merged","created":"2024-02-14T01:01:13.328097961Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"82945af1","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.
kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"82945af1\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:01:13.139841313Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1
.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-blr8m\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"79232acc-f48d-4b46-8c04-17e044441e02\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-blr8m_79232acc-f48d-4b46-8c04-17e044441e02/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0fa2dec7415a3081e5a48eb643a5f386241ccae86871f61764c323b9a9a73905/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-blr8m_kube-system_79232acc-f48d-4b46-8c04-17e044441e02_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b9c06cd1598992ebca09f7225fc3aca16471488793bbee8edc87dc60b0ce23fe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b9c06cd15
98992ebca09f7225fc3aca16471488793bbee8edc87dc60b0ce23fe","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-blr8m_kube-system_79232acc-f48d-4b46-8c04-17e044441e02_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/79232acc-f48d-4b46-8c04-17e044441e02/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/79232acc-f48d-4b46-8c04-17e044441e02/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/79232acc-f48d-4b46-8c04-17e044441e02/containers/coredns/5952a9e4\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kub
ernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/79232acc-f48d-4b46-8c04-17e044441e02/volumes/kubernetes.io~projected/kube-api-access-q7f8g\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-blr8m","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"79232acc-f48d-4b46-8c04-17e044441e02","kubernetes.io/config.seen":"2024-02-14T01:01:03.000499019Z","kubernetes.io/config.source":"api"},"owner":"root"}]
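The JSON array above is the raw CRI-O container listing that minikube parses next; the lines that follow show it skipping every entry because each container is stopped rather than paused. As a rough way to reproduce such a listing by hand on the node, assuming crictl and jq are available there (jq is not part of the original log), one could run:

	# list kube-system containers with their CRI state (illustrative sketch, not from the log)
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o json | jq -r '.containers[] | "\(.id[0:13]) \(.state)"'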
	I0214 01:01:25.663594  631285 cri.go:126] list returned 14 containers
	I0214 01:01:25.663614  631285 cri.go:129] container: {ID:090931a4d720e9f5af76785cb50e723699a5515967278e2889721ff0e4b3d96d Status:stopped}
	I0214 01:01:25.663634  631285 cri.go:135] skipping {090931a4d720e9f5af76785cb50e723699a5515967278e2889721ff0e4b3d96d stopped}: state = "stopped", want "paused"
	I0214 01:01:25.663648  631285 cri.go:129] container: {ID:1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556 Status:stopped}
	I0214 01:01:25.663656  631285 cri.go:135] skipping {1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556 stopped}: state = "stopped", want "paused"
	I0214 01:01:25.663667  631285 cri.go:129] container: {ID:23586c9364439c469cd850ca3466e08e9f53ed48c3c24d45a852824fdabb3c4d Status:stopped}
	I0214 01:01:25.663675  631285 cri.go:135] skipping {23586c9364439c469cd850ca3466e08e9f53ed48c3c24d45a852824fdabb3c4d stopped}: state = "stopped", want "paused"
	I0214 01:01:25.663684  631285 cri.go:129] container: {ID:378b9e0fb60b37985f1e3af2fe6b389e6349e3a17e3466d010eff322ea2a5d15 Status:stopped}
	I0214 01:01:25.663691  631285 cri.go:135] skipping {378b9e0fb60b37985f1e3af2fe6b389e6349e3a17e3466d010eff322ea2a5d15 stopped}: state = "stopped", want "paused"
	I0214 01:01:25.663702  631285 cri.go:129] container: {ID:61c66c21d744ad1eb60ed7f7766f6a9801eac623e67e0049153f6d2eb45012ba Status:stopped}
	I0214 01:01:25.663709  631285 cri.go:135] skipping {61c66c21d744ad1eb60ed7f7766f6a9801eac623e67e0049153f6d2eb45012ba stopped}: state = "stopped", want "paused"
	I0214 01:01:25.663717  631285 cri.go:129] container: {ID:6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257 Status:stopped}
	I0214 01:01:25.663728  631285 cri.go:135] skipping {6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257 stopped}: state = "stopped", want "paused"
	I0214 01:01:25.663734  631285 cri.go:129] container: {ID:827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6 Status:stopped}
	I0214 01:01:25.663741  631285 cri.go:135] skipping {827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6 stopped}: state = "stopped", want "paused"
	I0214 01:01:25.663751  631285 cri.go:129] container: {ID:8b9bc52559cbf14d19f7da83d1cb4837614b5904b896b091f82c459988dcd82e Status:stopped}
	I0214 01:01:25.663758  631285 cri.go:135] skipping {8b9bc52559cbf14d19f7da83d1cb4837614b5904b896b091f82c459988dcd82e stopped}: state = "stopped", want "paused"
	I0214 01:01:25.663770  631285 cri.go:129] container: {ID:9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc Status:stopped}
	I0214 01:01:25.663777  631285 cri.go:135] skipping {9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc stopped}: state = "stopped", want "paused"
	I0214 01:01:25.663787  631285 cri.go:129] container: {ID:a5c07c1af086efc1699569ac480ad9a53352042db016a74db405a9ebdcd9ce95 Status:stopped}
	I0214 01:01:25.663794  631285 cri.go:135] skipping {a5c07c1af086efc1699569ac480ad9a53352042db016a74db405a9ebdcd9ce95 stopped}: state = "stopped", want "paused"
	I0214 01:01:25.663802  631285 cri.go:129] container: {ID:a7101836fc16c98778a0190b45cb702f19b3b5c395ecd6efdcea9736cdcb0acd Status:stopped}
	I0214 01:01:25.663809  631285 cri.go:135] skipping {a7101836fc16c98778a0190b45cb702f19b3b5c395ecd6efdcea9736cdcb0acd stopped}: state = "stopped", want "paused"
	I0214 01:01:25.663819  631285 cri.go:129] container: {ID:aa8f37be653fa5260cc7463da7529cb866e0ab511c6be3899b775f01798b571e Status:stopped}
	I0214 01:01:25.663826  631285 cri.go:135] skipping {aa8f37be653fa5260cc7463da7529cb866e0ab511c6be3899b775f01798b571e stopped}: state = "stopped", want "paused"
	I0214 01:01:25.663832  631285 cri.go:129] container: {ID:b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314 Status:stopped}
	I0214 01:01:25.663840  631285 cri.go:135] skipping {b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314 stopped}: state = "stopped", want "paused"
	I0214 01:01:25.663849  631285 cri.go:129] container: {ID:fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7 Status:stopped}
	I0214 01:01:25.663856  631285 cri.go:135] skipping {fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7 stopped}: state = "stopped", want "paused"
	I0214 01:01:25.663922  631285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 01:01:25.674975  631285 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0214 01:01:25.674998  631285 kubeadm.go:636] restartCluster start
	I0214 01:01:25.675066  631285 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0214 01:01:25.688641  631285 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:25.689279  631285 kubeconfig.go:92] found "pause-644788" server: "https://192.168.76.2:8443"
	I0214 01:01:25.690290  631285 kapi.go:59] client config for pause-644788: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/pause-644788/client.crt", KeyFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/pause-644788/client.key", CAFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 01:01:25.691079  631285 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0214 01:01:25.703446  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:25.703523  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:25.715050  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:26.204037  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:26.204123  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:26.223330  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:26.703962  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:26.704066  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:26.714999  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:27.203616  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:27.203733  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:27.216904  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:27.703391  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:27.703481  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:27.713958  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:28.203131  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:28.203247  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:28.213760  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:28.703160  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:28.703240  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:28.713293  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:29.203950  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:29.204034  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:29.213914  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:29.703089  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:29.703177  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:29.714033  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:30.203717  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:30.203824  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:30.217554  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:30.703300  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:30.703374  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:30.719266  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:31.203088  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:31.203168  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:31.215005  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:31.703572  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:31.703651  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:31.714777  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:32.203158  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:32.203235  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:32.213803  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:32.703202  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:32.703307  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:32.714941  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:33.203497  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:33.203577  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:33.214499  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:33.703101  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:33.703196  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:33.713093  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:34.203728  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:34.203814  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:34.214790  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:34.703284  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:34.703355  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:34.715386  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:35.204022  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:35.204142  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:35.215011  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:35.703772  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:35.703843  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:35.719252  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:35.719279  631285 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
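Each of the attempts above is the same liveness probe, taken verbatim from the log: if pgrep finds no kube-apiserver process the control plane is considered down, and once the deadline passes minikube gives up on a soft restart and falls back to reconfiguring the cluster. The probe on its own, for manual debugging on the node:

	# exits with status 1 (as seen above) when no matching apiserver process exists
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'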
	I0214 01:01:35.719288  631285 kubeadm.go:1135] stopping kube-system containers ...
	I0214 01:01:35.719297  631285 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0214 01:01:35.719352  631285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 01:01:35.777393  631285 cri.go:89] found id: "fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7"
	I0214 01:01:35.777414  631285 cri.go:89] found id: "1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556"
	I0214 01:01:35.777420  631285 cri.go:89] found id: "9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc"
	I0214 01:01:35.777425  631285 cri.go:89] found id: "6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257"
	I0214 01:01:35.777430  631285 cri.go:89] found id: "827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6"
	I0214 01:01:35.777434  631285 cri.go:89] found id: "b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314"
	I0214 01:01:35.777439  631285 cri.go:89] found id: "378b9e0fb60b37985f1e3af2fe6b389e6349e3a17e3466d010eff322ea2a5d15"
	I0214 01:01:35.777443  631285 cri.go:89] found id: ""
	I0214 01:01:35.777448  631285 cri.go:234] Stopping containers: [fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7 1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556 9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc 6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257 827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6 b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314 378b9e0fb60b37985f1e3af2fe6b389e6349e3a17e3466d010eff322ea2a5d15]
	I0214 01:01:35.777501  631285 ssh_runner.go:195] Run: which crictl
	I0214 01:01:35.781713  631285 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7 1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556 9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc 6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257 827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6 b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314 378b9e0fb60b37985f1e3af2fe6b389e6349e3a17e3466d010eff322ea2a5d15
	I0214 01:01:35.866617  631285 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0214 01:01:35.957663  631285 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 01:01:35.966989  631285 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 14 01:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb 14 01:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Feb 14 01:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 14 01:00 /etc/kubernetes/scheduler.conf
	
	I0214 01:01:35.967088  631285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 01:01:35.976265  631285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 01:01:35.984987  631285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 01:01:35.994377  631285 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:35.994456  631285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 01:01:36.004149  631285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 01:01:36.017133  631285 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:36.017245  631285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 01:01:36.028439  631285 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 01:01:36.039723  631285 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0214 01:01:36.039751  631285 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:36.108921  631285 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:37.437487  631285 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.328532392s)
	I0214 01:01:37.437518  631285 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:37.622884  631285 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:37.715539  631285 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:37.805030  631285 api_server.go:52] waiting for apiserver process to appear ...
	I0214 01:01:37.805116  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 01:01:38.306139  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 01:01:38.805918  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 01:01:38.846408  631285 api_server.go:72] duration metric: took 1.041379823s to wait for apiserver process to appear ...
	I0214 01:01:38.846430  631285 api_server.go:88] waiting for apiserver healthz status ...
	I0214 01:01:38.846449  631285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0214 01:01:42.890876  631285 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0214 01:01:42.890914  631285 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0214 01:01:42.890930  631285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0214 01:01:43.000857  631285 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0214 01:01:43.000890  631285 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0214 01:01:43.347303  631285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0214 01:01:43.357331  631285 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0214 01:01:43.357362  631285 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0214 01:01:43.846575  631285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0214 01:01:43.859369  631285 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0214 01:01:43.859400  631285 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0214 01:01:44.346572  631285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0214 01:01:44.357312  631285 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0214 01:01:44.372759  631285 api_server.go:141] control plane version: v1.28.4
	I0214 01:01:44.372793  631285 api_server.go:131] duration metric: took 5.526354908s to wait for apiserver health ...
	I0214 01:01:44.372803  631285 cni.go:84] Creating CNI manager for ""
	I0214 01:01:44.372813  631285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 01:01:44.375361  631285 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0214 01:01:44.377383  631285 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0214 01:01:44.381175  631285 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0214 01:01:44.381195  631285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0214 01:01:44.399370  631285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0214 01:01:45.311589  631285 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 01:01:45.321589  631285 system_pods.go:59] 7 kube-system pods found
	I0214 01:01:45.321637  631285 system_pods.go:61] "coredns-5dd5756b68-blr8m" [79232acc-f48d-4b46-8c04-17e044441e02] Running
	I0214 01:01:45.321648  631285 system_pods.go:61] "etcd-pause-644788" [1fe50aac-82bf-4b34-a62a-de740c19f8a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 01:01:45.321827  631285 system_pods.go:61] "kindnet-nxl78" [2cd1ad76-088c-4810-9812-5fa72cc11eab] Running
	I0214 01:01:45.321840  631285 system_pods.go:61] "kube-apiserver-pause-644788" [f950e242-985f-4426-82e0-ca23a4b7b158] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 01:01:45.321861  631285 system_pods.go:61] "kube-controller-manager-pause-644788" [db5b3438-52ba-451c-b8a9-104888291481] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 01:01:45.321886  631285 system_pods.go:61] "kube-proxy-bnbc8" [c162e76e-4f54-45bb-908d-b3e05565dcad] Running
	I0214 01:01:45.321904  631285 system_pods.go:61] "kube-scheduler-pause-644788" [e2626363-c6c6-4342-b4ab-d6a6e90d4911] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 01:01:45.321915  631285 system_pods.go:74] duration metric: took 10.301354ms to wait for pod list to return data ...
	I0214 01:01:45.321933  631285 node_conditions.go:102] verifying NodePressure condition ...
	I0214 01:01:45.328779  631285 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 01:01:45.328820  631285 node_conditions.go:123] node cpu capacity is 2
	I0214 01:01:45.328833  631285 node_conditions.go:105] duration metric: took 6.894ms to run NodePressure ...
	I0214 01:01:45.328886  631285 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:45.533970  631285 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0214 01:01:45.539386  631285 kubeadm.go:787] kubelet initialised
	I0214 01:01:45.539409  631285 kubeadm.go:788] duration metric: took 5.411467ms waiting for restarted kubelet to initialise ...
	I0214 01:01:45.539423  631285 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 01:01:45.548160  631285 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-blr8m" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:45.554543  631285 pod_ready.go:92] pod "coredns-5dd5756b68-blr8m" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:45.554571  631285 pod_ready.go:81] duration metric: took 6.37512ms waiting for pod "coredns-5dd5756b68-blr8m" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:45.554584  631285 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:47.561289  631285 pod_ready.go:102] pod "etcd-pause-644788" in "kube-system" namespace has status "Ready":"False"
	I0214 01:01:49.563588  631285 pod_ready.go:102] pod "etcd-pause-644788" in "kube-system" namespace has status "Ready":"False"
	I0214 01:01:50.562398  631285 pod_ready.go:92] pod "etcd-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:50.562461  631285 pod_ready.go:81] duration metric: took 5.007836503s waiting for pod "etcd-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:50.562483  631285 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:52.574372  631285 pod_ready.go:102] pod "kube-apiserver-pause-644788" in "kube-system" namespace has status "Ready":"False"
	I0214 01:01:55.070021  631285 pod_ready.go:102] pod "kube-apiserver-pause-644788" in "kube-system" namespace has status "Ready":"False"
	I0214 01:01:56.081497  631285 pod_ready.go:92] pod "kube-apiserver-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:56.081525  631285 pod_ready.go:81] duration metric: took 5.519033491s waiting for pod "kube-apiserver-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.081537  631285 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.102776  631285 pod_ready.go:92] pod "kube-controller-manager-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:56.102809  631285 pod_ready.go:81] duration metric: took 21.255224ms waiting for pod "kube-controller-manager-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.102824  631285 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bnbc8" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.116577  631285 pod_ready.go:92] pod "kube-proxy-bnbc8" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:56.116602  631285 pod_ready.go:81] duration metric: took 13.771099ms waiting for pod "kube-proxy-bnbc8" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.116614  631285 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.129681  631285 pod_ready.go:92] pod "kube-scheduler-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:56.129714  631285 pod_ready.go:81] duration metric: took 13.083301ms waiting for pod "kube-scheduler-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.129754  631285 pod_ready.go:38] duration metric: took 10.590316695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 01:01:56.129784  631285 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 01:01:56.150819  631285 ops.go:34] apiserver oom_adj: -16
	I0214 01:01:56.150899  631285 kubeadm.go:640] restartCluster took 30.475893583s
	I0214 01:01:56.150924  631285 kubeadm.go:406] StartCluster complete in 30.621349297s
	I0214 01:01:56.150972  631285 settings.go:142] acquiring lock: {Name:mk6da46f5cb0f714c2fcf3244fbf0dfa768ab578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 01:01:56.151054  631285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 01:01:56.152049  631285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/kubeconfig: {Name:mke09ed5dbaa4240bee61fddd1ec0468d82bdfbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 01:01:56.153023  631285 kapi.go:59] client config for pause-644788: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/pause-644788/client.crt", KeyFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/pause-644788/client.key", CAFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 01:01:56.153267  631285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 01:01:56.153265  631285 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0214 01:01:56.156139  631285 out.go:177] * Enabled addons: 
	I0214 01:01:56.153515  631285 config.go:182] Loaded profile config "pause-644788": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0214 01:01:56.158052  631285 addons.go:505] enable addons completed in 4.7904ms: enabled=[]
	I0214 01:01:56.167103  631285 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-644788" context rescaled to 1 replicas
	I0214 01:01:56.167246  631285 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 01:01:56.169336  631285 out.go:177] * Verifying Kubernetes components...
	I0214 01:01:56.172439  631285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 01:01:56.327490  631285 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0214 01:01:56.327590  631285 node_ready.go:35] waiting up to 6m0s for node "pause-644788" to be "Ready" ...
	I0214 01:01:56.330767  631285 node_ready.go:49] node "pause-644788" has status "Ready":"True"
	I0214 01:01:56.330827  631285 node_ready.go:38] duration metric: took 3.197074ms waiting for node "pause-644788" to be "Ready" ...
	I0214 01:01:56.330860  631285 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 01:01:56.337299  631285 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-blr8m" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.470054  631285 pod_ready.go:92] pod "coredns-5dd5756b68-blr8m" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:56.470122  631285 pod_ready.go:81] duration metric: took 132.743373ms waiting for pod "coredns-5dd5756b68-blr8m" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.470149  631285 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.866876  631285 pod_ready.go:92] pod "etcd-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:56.866949  631285 pod_ready.go:81] duration metric: took 396.777806ms waiting for pod "etcd-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.866979  631285 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:57.266075  631285 pod_ready.go:92] pod "kube-apiserver-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:57.266100  631285 pod_ready.go:81] duration metric: took 399.100268ms waiting for pod "kube-apiserver-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:57.266117  631285 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:57.667148  631285 pod_ready.go:92] pod "kube-controller-manager-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:57.667175  631285 pod_ready.go:81] duration metric: took 401.050048ms waiting for pod "kube-controller-manager-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:57.667187  631285 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bnbc8" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:58.067490  631285 pod_ready.go:92] pod "kube-proxy-bnbc8" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:58.067517  631285 pod_ready.go:81] duration metric: took 400.321782ms waiting for pod "kube-proxy-bnbc8" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:58.067529  631285 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:58.466140  631285 pod_ready.go:92] pod "kube-scheduler-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:58.466167  631285 pod_ready.go:81] duration metric: took 398.630511ms waiting for pod "kube-scheduler-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:58.466178  631285 pod_ready.go:38] duration metric: took 2.135292649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 01:01:58.466192  631285 api_server.go:52] waiting for apiserver process to appear ...
	I0214 01:01:58.466258  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 01:01:58.478074  631285 api_server.go:72] duration metric: took 2.310779349s to wait for apiserver process to appear ...
	I0214 01:01:58.478101  631285 api_server.go:88] waiting for apiserver healthz status ...
	I0214 01:01:58.478124  631285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0214 01:01:58.487582  631285 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0214 01:01:58.489462  631285 api_server.go:141] control plane version: v1.28.4
	I0214 01:01:58.489483  631285 api_server.go:131] duration metric: took 11.375455ms to wait for apiserver health ...
	I0214 01:01:58.489491  631285 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 01:01:58.669807  631285 system_pods.go:59] 7 kube-system pods found
	I0214 01:01:58.669840  631285 system_pods.go:61] "coredns-5dd5756b68-blr8m" [79232acc-f48d-4b46-8c04-17e044441e02] Running
	I0214 01:01:58.669847  631285 system_pods.go:61] "etcd-pause-644788" [1fe50aac-82bf-4b34-a62a-de740c19f8a0] Running
	I0214 01:01:58.669853  631285 system_pods.go:61] "kindnet-nxl78" [2cd1ad76-088c-4810-9812-5fa72cc11eab] Running
	I0214 01:01:58.669858  631285 system_pods.go:61] "kube-apiserver-pause-644788" [f950e242-985f-4426-82e0-ca23a4b7b158] Running
	I0214 01:01:58.669864  631285 system_pods.go:61] "kube-controller-manager-pause-644788" [db5b3438-52ba-451c-b8a9-104888291481] Running
	I0214 01:01:58.669870  631285 system_pods.go:61] "kube-proxy-bnbc8" [c162e76e-4f54-45bb-908d-b3e05565dcad] Running
	I0214 01:01:58.669875  631285 system_pods.go:61] "kube-scheduler-pause-644788" [e2626363-c6c6-4342-b4ab-d6a6e90d4911] Running
	I0214 01:01:58.669881  631285 system_pods.go:74] duration metric: took 180.385029ms to wait for pod list to return data ...
	I0214 01:01:58.669895  631285 default_sa.go:34] waiting for default service account to be created ...
	I0214 01:01:58.866480  631285 default_sa.go:45] found service account: "default"
	I0214 01:01:58.866505  631285 default_sa.go:55] duration metric: took 196.598335ms for default service account to be created ...
	I0214 01:01:58.866516  631285 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 01:01:59.069051  631285 system_pods.go:86] 7 kube-system pods found
	I0214 01:01:59.069084  631285 system_pods.go:89] "coredns-5dd5756b68-blr8m" [79232acc-f48d-4b46-8c04-17e044441e02] Running
	I0214 01:01:59.069092  631285 system_pods.go:89] "etcd-pause-644788" [1fe50aac-82bf-4b34-a62a-de740c19f8a0] Running
	I0214 01:01:59.069124  631285 system_pods.go:89] "kindnet-nxl78" [2cd1ad76-088c-4810-9812-5fa72cc11eab] Running
	I0214 01:01:59.069133  631285 system_pods.go:89] "kube-apiserver-pause-644788" [f950e242-985f-4426-82e0-ca23a4b7b158] Running
	I0214 01:01:59.069139  631285 system_pods.go:89] "kube-controller-manager-pause-644788" [db5b3438-52ba-451c-b8a9-104888291481] Running
	I0214 01:01:59.069150  631285 system_pods.go:89] "kube-proxy-bnbc8" [c162e76e-4f54-45bb-908d-b3e05565dcad] Running
	I0214 01:01:59.069155  631285 system_pods.go:89] "kube-scheduler-pause-644788" [e2626363-c6c6-4342-b4ab-d6a6e90d4911] Running
	I0214 01:01:59.069169  631285 system_pods.go:126] duration metric: took 202.646728ms to wait for k8s-apps to be running ...
	I0214 01:01:59.069176  631285 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 01:01:59.069262  631285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 01:01:59.081382  631285 system_svc.go:56] duration metric: took 12.195224ms WaitForService to wait for kubelet.
	I0214 01:01:59.081406  631285 kubeadm.go:581] duration metric: took 2.914120041s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0214 01:01:59.081426  631285 node_conditions.go:102] verifying NodePressure condition ...
	I0214 01:01:59.266169  631285 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 01:01:59.266200  631285 node_conditions.go:123] node cpu capacity is 2
	I0214 01:01:59.266213  631285 node_conditions.go:105] duration metric: took 184.782389ms to run NodePressure ...
	I0214 01:01:59.266225  631285 start.go:228] waiting for startup goroutines ...
	I0214 01:01:59.266232  631285 start.go:233] waiting for cluster config update ...
	I0214 01:01:59.266239  631285 start.go:242] writing updated cluster config ...
	I0214 01:01:59.266885  631285 ssh_runner.go:195] Run: rm -f paused
	I0214 01:01:59.353523  631285 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0214 01:01:59.356670  631285 out.go:177] * Done! kubectl is now configured to use "pause-644788" cluster and "default" namespace by default

                                                
                                                
** /stderr **
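A large part of the restart log above is minikube's api_server.go polling https://192.168.76.2:8443/healthz until the restarted apiserver reports healthy: 403 (anonymous user) and 500 (post-start hooks such as rbac/bootstrap-roles still failing) both count as "not ready yet", and the wait ends on the first 200/"ok". The Go sketch below reproduces that style of probe under stated assumptions (endpoint copied from the log, TLS verification skipped because no client certificate is presented); it is not minikube's implementation.

// healthzwait.go - a minimal sketch of the apiserver readiness polling seen in
// the restart log above ("waiting for apiserver healthz status ...").
// Not minikube's code: the endpoint is taken from the log, and TLS verification
// is skipped only because this probe presents no client certificate.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 (anonymous user) and 500 (post-start hooks still failing) both
			// mean "not ready yet", exactly as in the log; only 200 passes.
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log re-checks roughly every 500ms
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the run above this wait succeeded after about 5.5 seconds ("took 5.526354908s to wait for apiserver health"), so the probe itself is not where the test time was lost.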
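Once the apiserver is healthy, the log waits for each system-critical pod (pod_ready.go) to report the Ready condition, first during restartCluster and again during component verification. Below is a minimal client-go sketch of that kind of check; it is not minikube's code, and the kubeconfig path and pod names are simply copied from the log for illustration.

// podreadywait.go - a hedged client-go sketch of the "waiting for pod ... to be
// Ready" checks in the log above. Paths and pod names come from the log and are
// illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// kubeconfig path as reported by the log; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18169-498689/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	names := []string{"etcd-pause-644788", "kube-apiserver-pause-644788", "kube-scheduler-pause-644788"}
	for _, name := range names {
		for {
			pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Printf("pod %q is Ready\n", name)
				break
			}
			time.Sleep(2 * time.Second) // the log re-checks every couple of seconds
		}
	}
}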
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-644788
helpers_test.go:235: (dbg) docker inspect pause-644788:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "184d19e687298e2ccb7e92e8033d99e1deb122dcb3a7986835793b8172d8495d",
	        "Created": "2024-02-14T00:59:53.316471425Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 627442,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T00:59:53.622272204Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/184d19e687298e2ccb7e92e8033d99e1deb122dcb3a7986835793b8172d8495d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/184d19e687298e2ccb7e92e8033d99e1deb122dcb3a7986835793b8172d8495d/hostname",
	        "HostsPath": "/var/lib/docker/containers/184d19e687298e2ccb7e92e8033d99e1deb122dcb3a7986835793b8172d8495d/hosts",
	        "LogPath": "/var/lib/docker/containers/184d19e687298e2ccb7e92e8033d99e1deb122dcb3a7986835793b8172d8495d/184d19e687298e2ccb7e92e8033d99e1deb122dcb3a7986835793b8172d8495d-json.log",
	        "Name": "/pause-644788",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-644788:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-644788",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/31376664be10f046ea4364990762dbaf8824987595c2779330a9c8db67466de4-init/diff:/var/lib/docker/overlay2/6bce6236d7ba68734b2ab000b848b0bb40e1e541964b0b25c50d016c8f0ef97c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31376664be10f046ea4364990762dbaf8824987595c2779330a9c8db67466de4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31376664be10f046ea4364990762dbaf8824987595c2779330a9c8db67466de4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31376664be10f046ea4364990762dbaf8824987595c2779330a9c8db67466de4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-644788",
	                "Source": "/var/lib/docker/volumes/pause-644788/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-644788",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-644788",
	                "name.minikube.sigs.k8s.io": "pause-644788",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b10e00d4e60323a1a1db11de843012b548f239a775e0297c0b14a363bbdf5e8e",
	            "SandboxKey": "/var/run/docker/netns/b10e00d4e603",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33587"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33586"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33583"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33585"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33584"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-644788": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "184d19e68729",
	                        "pause-644788"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "600f44a1766075a44606adf3483b14b1514603883f2208372d3926330fc0c99f",
	                    "EndpointID": "657d781cfefd0580d9b89cca0e965c9023381a9795b1c43f8db5a7010c3138f1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "pause-644788",
	                        "184d19e68729"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
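The post-mortem above dumps the full docker inspect JSON for pause-644788. When only a couple of fields matter (is the container still running, which IP the minikube network assigned), the same information can be extracted with docker inspect --format templates; the hedged Go sketch below shells out to the docker CLI to do that, with the container name taken from the output above.

// inspectfields.go - a sketch of pulling just the interesting fields out of the
// `docker inspect` output shown above, instead of reading the full JSON dump.
// The container name "pause-644788" comes from the log; the --format strings are
// standard docker CLI Go templates.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func inspectField(container, format string) (string, error) {
	out, err := exec.Command("docker", "inspect", "--format", format, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	name := "pause-644788"
	status, err := inspectField(name, "{{.State.Status}}")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	ip, _ := inspectField(name, "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}")
	fmt.Printf("%s: status=%s ip=%s\n", name, status, ip)
}

Run against the container above it would print something like "pause-644788: status=running ip=192.168.76.2", matching the State and NetworkSettings sections of the JSON.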
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-644788 -n pause-644788
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-644788 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-644788 logs -n 25: (2.689324507s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-564237            | NoKubernetes-564237       | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC | 14 Feb 24 00:55 UTC |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-564237            | NoKubernetes-564237       | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC | 14 Feb 24 00:55 UTC |
	| start   | -p NoKubernetes-564237            | NoKubernetes-564237       | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC | 14 Feb 24 00:55 UTC |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-564237 sudo       | NoKubernetes-564237       | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-564237            | NoKubernetes-564237       | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC | 14 Feb 24 00:55 UTC |
	| start   | -p NoKubernetes-564237            | NoKubernetes-564237       | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC | 14 Feb 24 00:55 UTC |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-564237 sudo       | NoKubernetes-564237       | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-564237            | NoKubernetes-564237       | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC | 14 Feb 24 00:55 UTC |
	| start   | -p kubernetes-upgrade-727193      | kubernetes-upgrade-727193 | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC | 14 Feb 24 00:56 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-212863         | missing-upgrade-212863    | jenkins | v1.32.0 | 14 Feb 24 00:56 UTC | 14 Feb 24 00:57 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-727193      | kubernetes-upgrade-727193 | jenkins | v1.32.0 | 14 Feb 24 00:56 UTC | 14 Feb 24 00:56 UTC |
	| start   | -p kubernetes-upgrade-727193      | kubernetes-upgrade-727193 | jenkins | v1.32.0 | 14 Feb 24 00:56 UTC | 14 Feb 24 01:01 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-212863         | missing-upgrade-212863    | jenkins | v1.32.0 | 14 Feb 24 00:57 UTC | 14 Feb 24 00:57 UTC |
	| start   | -p stopped-upgrade-055750         | minikube                  | jenkins | v1.26.0 | 14 Feb 24 00:57 UTC | 14 Feb 24 00:57 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --vm-driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-055750 stop       | minikube                  | jenkins | v1.26.0 | 14 Feb 24 00:57 UTC | 14 Feb 24 00:57 UTC |
	| start   | -p stopped-upgrade-055750         | stopped-upgrade-055750    | jenkins | v1.32.0 | 14 Feb 24 00:57 UTC | 14 Feb 24 00:58 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-055750         | stopped-upgrade-055750    | jenkins | v1.32.0 | 14 Feb 24 00:58 UTC | 14 Feb 24 00:58 UTC |
	| start   | -p running-upgrade-905465         | minikube                  | jenkins | v1.26.0 | 14 Feb 24 00:58 UTC | 14 Feb 24 00:59 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --vm-driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p running-upgrade-905465         | running-upgrade-905465    | jenkins | v1.32.0 | 14 Feb 24 00:59 UTC | 14 Feb 24 00:59 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-905465         | running-upgrade-905465    | jenkins | v1.32.0 | 14 Feb 24 00:59 UTC | 14 Feb 24 00:59 UTC |
	| start   | -p pause-644788 --memory=2048     | pause-644788              | jenkins | v1.32.0 | 14 Feb 24 00:59 UTC | 14 Feb 24 01:01 UTC |
	|         | --install-addons=false            |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker        |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p pause-644788                   | pause-644788              | jenkins | v1.32.0 | 14 Feb 24 01:01 UTC | 14 Feb 24 01:01 UTC |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-727193      | kubernetes-upgrade-727193 | jenkins | v1.32.0 | 14 Feb 24 01:01 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-727193      | kubernetes-upgrade-727193 | jenkins | v1.32.0 | 14 Feb 24 01:01 UTC | 14 Feb 24 01:01 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-727193      | kubernetes-upgrade-727193 | jenkins | v1.32.0 | 14 Feb 24 01:01 UTC |                     |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
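	For reference, the kubernetes-upgrade-727193 sequence recorded in the table above can be replayed with the same flags; a minimal sketch (the out/minikube-linux-arm64 path is this job's build output and is assumed to be run from the workspace root):
	
	  out/minikube-linux-arm64 start -p kubernetes-upgrade-727193 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	  out/minikube-linux-arm64 start -p kubernetes-upgrade-727193 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	  out/minikube-linux-arm64 delete -p kubernetes-upgrade-727193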
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 01:01:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 01:01:31.939591  634014 out.go:291] Setting OutFile to fd 1 ...
	I0214 01:01:31.939793  634014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 01:01:31.939818  634014 out.go:304] Setting ErrFile to fd 2...
	I0214 01:01:31.939839  634014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 01:01:31.940141  634014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
	I0214 01:01:31.940605  634014 out.go:298] Setting JSON to false
	I0214 01:01:31.941650  634014 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13435,"bootTime":1707859057,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0214 01:01:31.941784  634014 start.go:138] virtualization:  
	I0214 01:01:31.944568  634014 out.go:177] * [kubernetes-upgrade-727193] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 01:01:31.948250  634014 out.go:177]   - MINIKUBE_LOCATION=18169
	I0214 01:01:31.948324  634014 notify.go:220] Checking for updates...
	I0214 01:01:31.951453  634014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 01:01:31.954514  634014 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 01:01:31.956576  634014 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	I0214 01:01:31.959049  634014 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 01:01:31.961506  634014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 01:01:31.964800  634014 config.go:182] Loaded profile config "kubernetes-upgrade-727193": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0214 01:01:31.965402  634014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 01:01:31.993452  634014 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 01:01:31.993573  634014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 01:01:32.079736  634014 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:66 SystemTime:2024-02-14 01:01:32.068958265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 01:01:32.079835  634014 docker.go:295] overlay module found
	I0214 01:01:32.082412  634014 out.go:177] * Using the docker driver based on existing profile
	I0214 01:01:32.084734  634014 start.go:298] selected driver: docker
	I0214 01:01:32.084756  634014 start.go:902] validating driver "docker" against &{Name:kubernetes-upgrade-727193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-727193 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 01:01:32.084860  634014 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 01:01:32.085523  634014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 01:01:32.157351  634014 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:66 SystemTime:2024-02-14 01:01:32.148553025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 01:01:32.157755  634014 cni.go:84] Creating CNI manager for ""
	I0214 01:01:32.157776  634014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 01:01:32.157792  634014 start_flags.go:321] config:
	{Name:kubernetes-upgrade-727193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-727193 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 01:01:32.161283  634014 out.go:177] * Starting control plane node kubernetes-upgrade-727193 in cluster kubernetes-upgrade-727193
	I0214 01:01:32.163064  634014 cache.go:121] Beginning downloading kic base image for docker with crio
	I0214 01:01:32.165603  634014 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0214 01:01:32.168650  634014 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0214 01:01:32.168823  634014 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 01:01:32.169655  634014 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0214 01:01:32.169672  634014 cache.go:56] Caching tarball of preloaded images
	I0214 01:01:32.169897  634014 preload.go:174] Found /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0214 01:01:32.169909  634014 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0214 01:01:32.170030  634014 profile.go:148] Saving config to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kubernetes-upgrade-727193/config.json ...
	I0214 01:01:32.186300  634014 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0214 01:01:32.186328  634014 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0214 01:01:32.186351  634014 cache.go:194] Successfully downloaded all kic artifacts
	I0214 01:01:32.186380  634014 start.go:365] acquiring machines lock for kubernetes-upgrade-727193: {Name:mk8427b9df699bd9d1f7c77b7b05723dbff293cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 01:01:32.186498  634014 start.go:369] acquired machines lock for "kubernetes-upgrade-727193" in 86.498µs
	I0214 01:01:32.186531  634014 start.go:96] Skipping create...Using existing machine configuration
	I0214 01:01:32.186543  634014 fix.go:54] fixHost starting: 
	I0214 01:01:32.186815  634014 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-727193 --format={{.State.Status}}
	I0214 01:01:32.202106  634014 fix.go:102] recreateIfNeeded on kubernetes-upgrade-727193: state=Running err=<nil>
	W0214 01:01:32.202138  634014 fix.go:128] unexpected machine state, will restart: <nil>
	I0214 01:01:32.204708  634014 out.go:177] * Updating the running docker "kubernetes-upgrade-727193" container ...
	I0214 01:01:31.203088  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:31.203168  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:31.215005  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:31.703572  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:31.703651  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:31.714777  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:32.203158  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:32.203235  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:32.213803  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:32.703202  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:32.703307  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:32.714941  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:33.203497  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:33.203577  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:33.214499  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:33.703101  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:33.703196  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:33.713093  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:34.203728  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:34.203814  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:34.214790  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:34.703284  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:34.703355  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:34.715386  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:35.204022  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:35.204142  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:35.215011  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:35.703772  631285 api_server.go:166] Checking apiserver status ...
	I0214 01:01:35.703843  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:35.719252  631285 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:35.719279  631285 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0214 01:01:35.719288  631285 kubeadm.go:1135] stopping kube-system containers ...
	I0214 01:01:35.719297  631285 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0214 01:01:35.719352  631285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 01:01:32.210440  634014 machine.go:88] provisioning docker machine ...
	I0214 01:01:32.210473  634014 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-727193"
	I0214 01:01:32.210563  634014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-727193
	I0214 01:01:32.232947  634014 main.go:141] libmachine: Using SSH client type: native
	I0214 01:01:32.233383  634014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33567 <nil> <nil>}
	I0214 01:01:32.233395  634014 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-727193 && echo "kubernetes-upgrade-727193" | sudo tee /etc/hostname
	I0214 01:01:32.378480  634014 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-727193
	
	I0214 01:01:32.378589  634014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-727193
	I0214 01:01:32.396755  634014 main.go:141] libmachine: Using SSH client type: native
	I0214 01:01:32.397160  634014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33567 <nil> <nil>}
	I0214 01:01:32.397184  634014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-727193' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-727193/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-727193' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 01:01:32.529964  634014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 01:01:32.529993  634014 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18169-498689/.minikube CaCertPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18169-498689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18169-498689/.minikube}
	I0214 01:01:32.530013  634014 ubuntu.go:177] setting up certificates
	I0214 01:01:32.530023  634014 provision.go:83] configureAuth start
	I0214 01:01:32.530099  634014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-727193
	I0214 01:01:32.546680  634014 provision.go:138] copyHostCerts
	I0214 01:01:32.546741  634014 exec_runner.go:144] found /home/jenkins/minikube-integration/18169-498689/.minikube/cert.pem, removing ...
	I0214 01:01:32.546750  634014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18169-498689/.minikube/cert.pem
	I0214 01:01:32.546828  634014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18169-498689/.minikube/cert.pem (1123 bytes)
	I0214 01:01:32.546943  634014 exec_runner.go:144] found /home/jenkins/minikube-integration/18169-498689/.minikube/key.pem, removing ...
	I0214 01:01:32.546956  634014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18169-498689/.minikube/key.pem
	I0214 01:01:32.546983  634014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18169-498689/.minikube/key.pem (1675 bytes)
	I0214 01:01:32.547050  634014 exec_runner.go:144] found /home/jenkins/minikube-integration/18169-498689/.minikube/ca.pem, removing ...
	I0214 01:01:32.547062  634014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18169-498689/.minikube/ca.pem
	I0214 01:01:32.547098  634014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18169-498689/.minikube/ca.pem (1078 bytes)
	I0214 01:01:32.547153  634014 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18169-498689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-727193 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-727193]
	I0214 01:01:33.034993  634014 provision.go:172] copyRemoteCerts
	I0214 01:01:33.035074  634014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 01:01:33.035119  634014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-727193
	I0214 01:01:33.054489  634014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/kubernetes-upgrade-727193/id_rsa Username:docker}
	I0214 01:01:33.150857  634014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0214 01:01:33.175762  634014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0214 01:01:33.201444  634014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0214 01:01:33.225855  634014 provision.go:86] duration metric: configureAuth took 695.813117ms
	I0214 01:01:33.225880  634014 ubuntu.go:193] setting minikube options for container-runtime
	I0214 01:01:33.226063  634014 config.go:182] Loaded profile config "kubernetes-upgrade-727193": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0214 01:01:33.226169  634014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-727193
	I0214 01:01:33.243196  634014 main.go:141] libmachine: Using SSH client type: native
	I0214 01:01:33.243605  634014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33567 <nil> <nil>}
	I0214 01:01:33.243625  634014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 01:01:33.714468  634014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 01:01:33.714489  634014 machine.go:91] provisioned docker machine in 1.504027033s
	I0214 01:01:33.714499  634014 start.go:300] post-start starting for "kubernetes-upgrade-727193" (driver="docker")
	I0214 01:01:33.714519  634014 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 01:01:33.714580  634014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 01:01:33.714664  634014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-727193
	I0214 01:01:33.735865  634014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/kubernetes-upgrade-727193/id_rsa Username:docker}
	I0214 01:01:33.831485  634014 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 01:01:33.834484  634014 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0214 01:01:33.834578  634014 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0214 01:01:33.834609  634014 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0214 01:01:33.834630  634014 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0214 01:01:33.834665  634014 filesync.go:126] Scanning /home/jenkins/minikube-integration/18169-498689/.minikube/addons for local assets ...
	I0214 01:01:33.834751  634014 filesync.go:126] Scanning /home/jenkins/minikube-integration/18169-498689/.minikube/files for local assets ...
	I0214 01:01:33.834901  634014 filesync.go:149] local asset: /home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/5040612.pem -> 5040612.pem in /etc/ssl/certs
	I0214 01:01:33.835052  634014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 01:01:33.844243  634014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/5040612.pem --> /etc/ssl/certs/5040612.pem (1708 bytes)
	I0214 01:01:33.868682  634014 start.go:303] post-start completed in 154.166105ms
	I0214 01:01:33.868803  634014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 01:01:33.868858  634014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-727193
	I0214 01:01:33.884825  634014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/kubernetes-upgrade-727193/id_rsa Username:docker}
	I0214 01:01:33.974474  634014 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0214 01:01:33.978792  634014 fix.go:56] fixHost completed within 1.79224289s
	I0214 01:01:33.978819  634014 start.go:83] releasing machines lock for "kubernetes-upgrade-727193", held for 1.792306258s
	I0214 01:01:33.978894  634014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-727193
	I0214 01:01:33.994101  634014 ssh_runner.go:195] Run: cat /version.json
	I0214 01:01:33.994158  634014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-727193
	I0214 01:01:33.994431  634014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 01:01:33.994496  634014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-727193
	I0214 01:01:34.016853  634014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/kubernetes-upgrade-727193/id_rsa Username:docker}
	I0214 01:01:34.025551  634014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/kubernetes-upgrade-727193/id_rsa Username:docker}
	I0214 01:01:34.113215  634014 ssh_runner.go:195] Run: systemctl --version
	I0214 01:01:34.245697  634014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 01:01:34.438490  634014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0214 01:01:34.448968  634014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 01:01:34.480580  634014 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0214 01:01:34.480660  634014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 01:01:34.510655  634014 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0214 01:01:34.510683  634014 start.go:475] detecting cgroup driver to use...
	I0214 01:01:34.510717  634014 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0214 01:01:34.510767  634014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 01:01:34.544870  634014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 01:01:34.565158  634014 docker.go:217] disabling cri-docker service (if available) ...
	I0214 01:01:34.565239  634014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 01:01:34.588973  634014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 01:01:34.611381  634014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 01:01:34.784112  634014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 01:01:34.933847  634014 docker.go:233] disabling docker service ...
	I0214 01:01:34.933972  634014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 01:01:34.963434  634014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 01:01:34.995041  634014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 01:01:35.152556  634014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 01:01:35.321484  634014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 01:01:35.351100  634014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 01:01:35.379943  634014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0214 01:01:35.380082  634014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 01:01:35.416531  634014 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 01:01:35.416653  634014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 01:01:35.447641  634014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 01:01:35.484542  634014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 01:01:35.518452  634014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 01:01:35.548630  634014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 01:01:35.577578  634014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 01:01:35.599761  634014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 01:01:35.789341  634014 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 01:01:36.067146  634014 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 01:01:36.067265  634014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 01:01:36.073196  634014 start.go:543] Will wait 60s for crictl version
	I0214 01:01:36.073321  634014 ssh_runner.go:195] Run: which crictl
	I0214 01:01:36.077931  634014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 01:01:36.131171  634014 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0214 01:01:36.131294  634014 ssh_runner.go:195] Run: crio --version
	I0214 01:01:36.205346  634014 ssh_runner.go:195] Run: crio --version
	I0214 01:01:36.259613  634014 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.6 ...
	I0214 01:01:36.261626  634014 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-727193 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 01:01:36.287604  634014 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0214 01:01:36.299965  634014 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0214 01:01:36.300037  634014 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 01:01:36.396653  634014 crio.go:496] all images are preloaded for cri-o runtime.
	I0214 01:01:36.396680  634014 crio.go:415] Images already preloaded, skipping extraction
	I0214 01:01:36.396732  634014 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 01:01:36.459684  634014 crio.go:496] all images are preloaded for cri-o runtime.
	I0214 01:01:36.459710  634014 cache_images.go:84] Images are preloaded, skipping loading
	I0214 01:01:36.459796  634014 ssh_runner.go:195] Run: crio config
	I0214 01:01:36.529151  634014 cni.go:84] Creating CNI manager for ""
	I0214 01:01:36.529176  634014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 01:01:36.529224  634014 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0214 01:01:36.529246  634014 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-727193 NodeName:kubernetes-upgrade-727193 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 01:01:36.529444  634014 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-727193"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
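	For reference, the kubeadm configuration rendered above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp lines below); as an illustration only, not part of the recorded run, it could be sanity-checked from inside the node with the bundled kubeadm binary (the config validate subcommand is available in recent kubeadm releases):
	
	  minikube -p kubernetes-upgrade-727193 ssh -- sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new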
	I0214 01:01:36.529553  634014 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-727193 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-727193 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
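	For reference (not something the recorded run executes), the kubelet unit drop-in generated here can be inspected on the node after the scp steps below, for example:
	
	  minikube -p kubernetes-upgrade-727193 ssh -- sudo systemctl cat kubelet
	  minikube -p kubernetes-upgrade-727193 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf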
	I0214 01:01:36.529643  634014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0214 01:01:36.539658  634014 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 01:01:36.539752  634014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 01:01:36.549210  634014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (440 bytes)
	I0214 01:01:36.569388  634014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0214 01:01:36.589592  634014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2111 bytes)
	I0214 01:01:36.609102  634014 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0214 01:01:36.613242  634014 certs.go:56] Setting up /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kubernetes-upgrade-727193 for IP: 192.168.67.2
	I0214 01:01:36.613299  634014 certs.go:190] acquiring lock for shared ca certs: {Name:mk24bda5a01a6d67ca318fbbda66875cef4a1a9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 01:01:36.613482  634014 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18169-498689/.minikube/ca.key
	I0214 01:01:36.613557  634014 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.key
	I0214 01:01:36.613657  634014 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kubernetes-upgrade-727193/client.key
	I0214 01:01:36.613775  634014 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kubernetes-upgrade-727193/apiserver.key.c7fa3a9e
	I0214 01:01:36.613863  634014 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kubernetes-upgrade-727193/proxy-client.key
	I0214 01:01:36.614027  634014 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/504061.pem (1338 bytes)
	W0214 01:01:36.614083  634014 certs.go:433] ignoring /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/504061_empty.pem, impossibly tiny 0 bytes
	I0214 01:01:36.614101  634014 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca-key.pem (1679 bytes)
	I0214 01:01:36.614151  634014 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/ca.pem (1078 bytes)
	I0214 01:01:36.614211  634014 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/cert.pem (1123 bytes)
	I0214 01:01:36.614258  634014 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/certs/home/jenkins/minikube-integration/18169-498689/.minikube/certs/key.pem (1675 bytes)
	I0214 01:01:36.614338  634014 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/5040612.pem (1708 bytes)
	I0214 01:01:36.615072  634014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kubernetes-upgrade-727193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0214 01:01:36.649392  634014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kubernetes-upgrade-727193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0214 01:01:36.675669  634014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kubernetes-upgrade-727193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 01:01:36.702049  634014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kubernetes-upgrade-727193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0214 01:01:36.728264  634014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 01:01:36.754977  634014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0214 01:01:36.782866  634014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 01:01:36.808858  634014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0214 01:01:36.835821  634014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/ssl/certs/5040612.pem --> /usr/share/ca-certificates/5040612.pem (1708 bytes)
	I0214 01:01:36.866140  634014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 01:01:36.892594  634014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-498689/.minikube/certs/504061.pem --> /usr/share/ca-certificates/504061.pem (1338 bytes)
	I0214 01:01:36.919019  634014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 01:01:36.938556  634014 ssh_runner.go:195] Run: openssl version
	I0214 01:01:35.777393  631285 cri.go:89] found id: "fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7"
	I0214 01:01:35.777414  631285 cri.go:89] found id: "1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556"
	I0214 01:01:35.777420  631285 cri.go:89] found id: "9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc"
	I0214 01:01:35.777425  631285 cri.go:89] found id: "6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257"
	I0214 01:01:35.777430  631285 cri.go:89] found id: "827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6"
	I0214 01:01:35.777434  631285 cri.go:89] found id: "b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314"
	I0214 01:01:35.777439  631285 cri.go:89] found id: "378b9e0fb60b37985f1e3af2fe6b389e6349e3a17e3466d010eff322ea2a5d15"
	I0214 01:01:35.777443  631285 cri.go:89] found id: ""
	I0214 01:01:35.777448  631285 cri.go:234] Stopping containers: [fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7 1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556 9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc 6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257 827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6 b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314 378b9e0fb60b37985f1e3af2fe6b389e6349e3a17e3466d010eff322ea2a5d15]
	I0214 01:01:35.777501  631285 ssh_runner.go:195] Run: which crictl
	I0214 01:01:35.781713  631285 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7 1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556 9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc 6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257 827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6 b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314 378b9e0fb60b37985f1e3af2fe6b389e6349e3a17e3466d010eff322ea2a5d15
	I0214 01:01:35.866617  631285 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0214 01:01:35.957663  631285 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 01:01:35.966989  631285 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 14 01:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb 14 01:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Feb 14 01:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 14 01:00 /etc/kubernetes/scheduler.conf
	
	I0214 01:01:35.967088  631285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 01:01:35.976265  631285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 01:01:35.984987  631285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 01:01:35.994377  631285 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:35.994456  631285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 01:01:36.004149  631285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 01:01:36.017133  631285 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:36.017245  631285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 01:01:36.028439  631285 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 01:01:36.039723  631285 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0214 01:01:36.039751  631285 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:36.108921  631285 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:37.437487  631285 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.328532392s)
	I0214 01:01:37.437518  631285 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:37.622884  631285 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:37.715539  631285 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:37.805030  631285 api_server.go:52] waiting for apiserver process to appear ...
	I0214 01:01:37.805116  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 01:01:38.306139  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 01:01:38.805918  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 01:01:38.846408  631285 api_server.go:72] duration metric: took 1.041379823s to wait for apiserver process to appear ...
	I0214 01:01:38.846430  631285 api_server.go:88] waiting for apiserver healthz status ...
	I0214 01:01:38.846449  631285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0214 01:01:36.945992  634014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040612.pem && ln -fs /usr/share/ca-certificates/5040612.pem /etc/ssl/certs/5040612.pem"
	I0214 01:01:36.956272  634014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5040612.pem
	I0214 01:01:36.960667  634014 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 00:26 /usr/share/ca-certificates/5040612.pem
	I0214 01:01:36.960770  634014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040612.pem
	I0214 01:01:36.968302  634014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5040612.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 01:01:36.977699  634014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 01:01:36.988114  634014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 01:01:36.992332  634014 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 00:19 /usr/share/ca-certificates/minikubeCA.pem
	I0214 01:01:36.992424  634014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 01:01:37.000305  634014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 01:01:37.012395  634014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/504061.pem && ln -fs /usr/share/ca-certificates/504061.pem /etc/ssl/certs/504061.pem"
	I0214 01:01:37.025767  634014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/504061.pem
	I0214 01:01:37.031307  634014 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 00:26 /usr/share/ca-certificates/504061.pem
	I0214 01:01:37.031431  634014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/504061.pem
	I0214 01:01:37.040812  634014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/504061.pem /etc/ssl/certs/51391683.0"
	I0214 01:01:37.054116  634014 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0214 01:01:37.058642  634014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0214 01:01:37.066475  634014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0214 01:01:37.074125  634014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0214 01:01:37.081631  634014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0214 01:01:37.089201  634014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0214 01:01:37.096765  634014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
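
Each `openssl x509 ... -checkend 86400` run above asks whether the certificate is still valid 86400 seconds (24 hours) from now; a certificate failing that check would be regenerated before the cluster restart. A rough Go equivalent of the check, reusing the etcd server certificate path purely as an example:

    // validFor mirrors `openssl x509 -checkend`: is the PEM certificate at
    // path still valid d from now?
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(validFor("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour))
    }
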
	I0214 01:01:37.104395  634014 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-727193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-727193 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 01:01:37.104514  634014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 01:01:37.104606  634014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 01:01:37.151631  634014 cri.go:89] found id: "0109f740290088b7202e241f7ae1957acea5d15767696c34a578b777ab87b18c"
	I0214 01:01:37.151657  634014 cri.go:89] found id: "cd9c9fc6ba8ae7d0d9b620b4a1c051da4efd655c1f08daa800c6daeb57c0742f"
	I0214 01:01:37.151664  634014 cri.go:89] found id: "a8ce4a9548d0710ce47e6d3beacb8efc9b127165791e26a553d5b41680bd53a5"
	I0214 01:01:37.151669  634014 cri.go:89] found id: "d674f0f4150a7965322423bb19e9b9113ce1f6ba21bdb0c44c8e522311274514"
	I0214 01:01:37.151706  634014 cri.go:89] found id: ""
	I0214 01:01:37.151785  634014 ssh_runner.go:195] Run: sudo runc list -f json
	I0214 01:01:37.172945  634014 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0109f740290088b7202e241f7ae1957acea5d15767696c34a578b777ab87b18c","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0109f740290088b7202e241f7ae1957acea5d15767696c34a578b777ab87b18c/userdata","rootfs":"/var/lib/containers/storage/overlay/804f2f230ce805c361f5fdcb92779bcb8f0de4721251a260962b2ba6b6d595f0/merged","created":"2024-02-14T01:01:34.422061962Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7d8a0274","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7d8a0274\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0109f740290088b7202e241f7ae1957acea5d15767696c34a578b777ab87b18c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:01:34.365416437Z","io.kubernetes.cri-o.Image":"488ec30dc9be36a34ccaa38325b2aceb0edcf83a0bdd086caee32699017b6342","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.29.0-rc.2","io.kubernetes.cri-o.ImageRef":"488ec30dc9be36a34ccaa38325b2aceb0edcf83a0bdd086caee32699017b6342","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-727193\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4e24326c8fdfc00d1887765024fa6413\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-727193_4e24326c8fdfc00d1887765024fa6413/kube-scheduler/2.log","io.kubernetes.cri-o.Metadata":"{\"name\
":\"kube-scheduler\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/804f2f230ce805c361f5fdcb92779bcb8f0de4721251a260962b2ba6b6d595f0/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-727193_kube-system_4e24326c8fdfc00d1887765024fa6413_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e2a0062b9aafd986c55bff4abe8366bb92bd4837aace766d02adeabac2d8ff54/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e2a0062b9aafd986c55bff4abe8366bb92bd4837aace766d02adeabac2d8ff54","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-727193_kube-system_4e24326c8fdfc00d1887765024fa6413_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4e24326c8fdfc00d1887765024fa6413/etc-hosts\",\"readonly\":
false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4e24326c8fdfc00d1887765024fa6413/containers/kube-scheduler/7a221040\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-727193","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4e24326c8fdfc00d1887765024fa6413","kubernetes.io/config.hash":"4e24326c8fdfc00d1887765024fa6413","kubernetes.io/config.seen":"2024-02-14T01:01:21.713657926Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a8ce4a9548d0710ce47e6d3beacb8efc9b127165791e26a553d5b41680bd53a5","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a8ce4a9548d0
710ce47e6d3beacb8efc9b127165791e26a553d5b41680bd53a5/userdata","rootfs":"/var/lib/containers/storage/overlay/8ebc14a3703912c955ddfd72cd75428aeaa86821484fd63fa1a5c96c9fb2614c/merged","created":"2024-02-14T01:01:34.404390779Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"47def728","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"6","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"47def728\",\"io.kubernetes.container.restartCount\":\"6\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a8ce4a9548d0710ce47e6d3beacb8efc9b127165791e26a553d5b41680bd53a5","io.kubernetes.cri-o.ContainerType":"container","io.kuberne
tes.cri-o.Created":"2024-02-14T01:01:34.321062964Z","io.kubernetes.cri-o.Image":"be43264efd65f9d0fbfa54d4437342579305212f27de5103f5bfcb6d1b6ffb17","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2","io.kubernetes.cri-o.ImageRef":"be43264efd65f9d0fbfa54d4437342579305212f27de5103f5bfcb6d1b6ffb17","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-727193\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"72179ca93bc380126b383b83a9612d5e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-727193_72179ca93bc380126b383b83a9612d5e/kube-controller-manager/6.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":6}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8ebc14a3703912c955ddfd72cd75428aeaa86821484fd63fa1a5c96c9fb2614c/merged","io.ku
bernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-727193_kube-system_72179ca93bc380126b383b83a9612d5e_6","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f965d872d9b993d146202a8b41eb351a207f48ad914396f77f72611e1f858fb3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f965d872d9b993d146202a8b41eb351a207f48ad914396f77f72611e1f858fb3","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-727193_kube-system_72179ca93bc380126b383b83a9612d5e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/72179ca93bc380126b383b83a9612d5e/containers/kube-controller-
manager/5d156b27\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/72179ca93bc380126b383b83a9612d5e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\
"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-727193","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"72179ca93bc380126b383b83a9612d5e","kubernetes.io/config.hash":"72179ca93bc380126b383b83a9612d5e","kubernetes.io/config.seen":"2024-02-14T01:01:21.713664883Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd9c9fc6ba8ae7d0d9b620b4a1c051da4efd655c1f08daa800c6daeb57c0742f","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/cd9c9fc6ba8ae7d0d9b620b4a1c051da4efd655c1f08daa800c6daeb57c0742f/userdata","rootfs":"/var/lib/containers/storage/overlay/60eafa78326b2d92779ee6c7fd8394805e0052684be96621a5ca26aae3083e1
3/merged","created":"2024-02-14T01:01:34.41691277Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"453c0162","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"6","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"453c0162\",\"io.kubernetes.container.restartCount\":\"6\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"cd9c9fc6ba8ae7d0d9b620b4a1c051da4efd655c1f08daa800c6daeb57c0742f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:01:34.342344314Z","io.kubernetes.cri-o.Image":"0dd9b8246cda61b71b39b714709ae99341d85b2427db97d6ee46e9bc287e441c","io.kubernetes.cri-o.ImageName":"re
gistry.k8s.io/kube-apiserver:v1.29.0-rc.2","io.kubernetes.cri-o.ImageRef":"0dd9b8246cda61b71b39b714709ae99341d85b2427db97d6ee46e9bc287e441c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-727193\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"dc8e9cce5bea67c2d88eadbf34106dbd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-727193_dc8e9cce5bea67c2d88eadbf34106dbd/kube-apiserver/6.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":6}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/60eafa78326b2d92779ee6c7fd8394805e0052684be96621a5ca26aae3083e13/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-727193_kube-system_dc8e9cce5bea67c2d88eadbf34106dbd_6","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/269f31c7380671e391301781a8466
aa57ab7163075b89d204360c6da0e933fe7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"269f31c7380671e391301781a8466aa57ab7163075b89d204360c6da0e933fe7","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-727193_kube-system_dc8e9cce5bea67c2d88eadbf34106dbd_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/dc8e9cce5bea67c2d88eadbf34106dbd/containers/kube-apiserver/3d8bdba5\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/dc8e9cce5bea67c2d88eadbf34106dbd/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"cont
ainer_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-727193","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"dc8e9cce5bea67c2d88eadbf34106dbd","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"dc8e9cce5bea67c2d88eadbf34106dbd","kubernetes.io/config.seen":"2024-02-14T01:
01:21.713663407Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d674f0f4150a7965322423bb19e9b9113ce1f6ba21bdb0c44c8e522311274514","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d674f0f4150a7965322423bb19e9b9113ce1f6ba21bdb0c44c8e522311274514/userdata","rootfs":"/var/lib/containers/storage/overlay/26db427a8ed54ede605e793d1155de5409145c9b94ae0d1284da687267039534/merged","created":"2024-02-14T01:01:34.349089532Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"afbb39ae","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"afbb39ae\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d674f0f4150a7965322423bb19e9b9113ce1f6ba21bdb0c44c8e522311274514","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-14T01:01:34.290501875Z","io.kubernetes.cri-o.Image":"79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.10-0","io.kubernetes.cri-o.ImageRef":"79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-727193\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8104f0bc6f7d3136815d94421b6c5388\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-727193_8104f0bc6f7d3136815d94421b6c5388/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.Mou
ntPoint":"/var/lib/containers/storage/overlay/26db427a8ed54ede605e793d1155de5409145c9b94ae0d1284da687267039534/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-kubernetes-upgrade-727193_kube-system_8104f0bc6f7d3136815d94421b6c5388_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4badcd495cbfdd455b46bbb42ae3fad54cc786bd9341e52bc7a2adf4701db3ba/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4badcd495cbfdd455b46bbb42ae3fad54cc786bd9341e52bc7a2adf4701db3ba","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-727193_kube-system_8104f0bc6f7d3136815d94421b6c5388_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8104f0bc6f7d3136815d94421b6c5388/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-l
og\",\"host_path\":\"/var/lib/kubelet/pods/8104f0bc6f7d3136815d94421b6c5388/containers/etcd/66284a6a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-727193","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8104f0bc6f7d3136815d94421b6c5388","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"8104f0bc6f7d3136815d94421b6c5388","kubernetes.io/config.seen":"2024-02-14T01:01:21.713661815Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I0214 01:01:37.173323  634014 cri.go:126] list returned 4 containers
	I0214 01:01:37.173341  634014 cri.go:129] container: {ID:0109f740290088b7202e241f7ae1957acea5d15767696c34a578b777ab87b18c Status:stopped}
	I0214 01:01:37.173380  634014 cri.go:135] skipping {0109f740290088b7202e241f7ae1957acea5d15767696c34a578b777ab87b18c stopped}: state = "stopped", want "paused"
	I0214 01:01:37.173394  634014 cri.go:129] container: {ID:a8ce4a9548d0710ce47e6d3beacb8efc9b127165791e26a553d5b41680bd53a5 Status:stopped}
	I0214 01:01:37.173402  634014 cri.go:135] skipping {a8ce4a9548d0710ce47e6d3beacb8efc9b127165791e26a553d5b41680bd53a5 stopped}: state = "stopped", want "paused"
	I0214 01:01:37.173412  634014 cri.go:129] container: {ID:cd9c9fc6ba8ae7d0d9b620b4a1c051da4efd655c1f08daa800c6daeb57c0742f Status:stopped}
	I0214 01:01:37.173419  634014 cri.go:135] skipping {cd9c9fc6ba8ae7d0d9b620b4a1c051da4efd655c1f08daa800c6daeb57c0742f stopped}: state = "stopped", want "paused"
	I0214 01:01:37.173439  634014 cri.go:129] container: {ID:d674f0f4150a7965322423bb19e9b9113ce1f6ba21bdb0c44c8e522311274514 Status:stopped}
	I0214 01:01:37.173452  634014 cri.go:135] skipping {d674f0f4150a7965322423bb19e9b9113ce1f6ba21bdb0c44c8e522311274514 stopped}: state = "stopped", want "paused"
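
The cri.go lines above list every kube-system container and keep only the ones whose state matches the requested one; all four containers are "stopped" while the caller wants "paused", so every ID is skipped. A small sketch of that filter (the Container struct is illustrative, not minikube's own type):

    // filterByState keeps containers whose status matches want, logging a
    // skip line for the rest, as cri.go does above.
    package main

    import "fmt"

    type Container struct {
        ID     string
        Status string
    }

    func filterByState(all []Container, want string) []string {
        var ids []string
        for _, c := range all {
            if c.Status != want {
                fmt.Printf("skipping %s: state = %q, want %q\n", c.ID, c.Status, want)
                continue
            }
            ids = append(ids, c.ID)
        }
        return ids
    }

    func main() {
        sample := []Container{{ID: "0109f740290088b7", Status: "stopped"}}
        fmt.Println(filterByState(sample, "paused")) // prints a skip line, returns an empty list
    }
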
	I0214 01:01:37.173544  634014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 01:01:37.184134  634014 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0214 01:01:37.184168  634014 kubeadm.go:636] restartCluster start
	I0214 01:01:37.184253  634014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0214 01:01:37.193735  634014 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:37.194531  634014 kubeconfig.go:92] found "kubernetes-upgrade-727193" server: "https://192.168.67.2:8443"
	I0214 01:01:37.195845  634014 kapi.go:59] client config for kubernetes-upgrade-727193: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kubernetes-upgrade-727193/client.crt", KeyFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kubernetes-upgrade-727193/client.key", CAFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 01:01:37.196822  634014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0214 01:01:37.206660  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:37.206745  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:37.217238  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:37.706754  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:37.706840  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:37.720314  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:38.206790  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:38.206920  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:38.216968  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:38.707237  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:38.707396  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:38.717391  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:39.206755  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:39.206859  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:39.219123  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:39.707752  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:39.707834  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:39.718404  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:40.206767  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:40.206866  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:40.219195  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:40.706789  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:40.706932  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:40.716973  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:41.207583  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:41.207745  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:41.220941  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:41.707497  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:41.707619  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:41.718461  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
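
Meanwhile the kubernetes-upgrade-727193 restart keeps probing for a kube-apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms, treating exit status 1 (no match) as "not started yet". The retry pattern, sketched with os/exec under the same timing assumptions:

    // waitForProcess retries pgrep until a matching process appears or the
    // deadline passes, matching the ~500ms cadence in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func waitForProcess(pattern string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil // PID found
            }
            // exit status 1 simply means no process matched yet
            time.Sleep(500 * time.Millisecond)
        }
        return "", fmt.Errorf("no process matching %q within %s", pattern, timeout)
    }

    func main() {
        fmt.Println(waitForProcess("kube-apiserver.*minikube.*", time.Minute))
    }
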
	I0214 01:01:42.890876  631285 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0214 01:01:42.890914  631285 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0214 01:01:42.890930  631285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0214 01:01:43.000857  631285 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0214 01:01:43.000890  631285 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0214 01:01:43.347303  631285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0214 01:01:43.357331  631285 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0214 01:01:43.357362  631285 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0214 01:01:43.846575  631285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0214 01:01:43.859369  631285 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0214 01:01:43.859400  631285 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0214 01:01:44.346572  631285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0214 01:01:44.357312  631285 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0214 01:01:44.372759  631285 api_server.go:141] control plane version: v1.28.4
	I0214 01:01:44.372793  631285 api_server.go:131] duration metric: took 5.526354908s to wait for apiserver health ...
	I0214 01:01:44.372803  631285 cni.go:84] Creating CNI manager for ""
	I0214 01:01:44.372813  631285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 01:01:44.375361  631285 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0214 01:01:44.377383  631285 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0214 01:01:44.381175  631285 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0214 01:01:44.381195  631285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0214 01:01:44.399370  631285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0214 01:01:45.311589  631285 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 01:01:45.321589  631285 system_pods.go:59] 7 kube-system pods found
	I0214 01:01:45.321637  631285 system_pods.go:61] "coredns-5dd5756b68-blr8m" [79232acc-f48d-4b46-8c04-17e044441e02] Running
	I0214 01:01:45.321648  631285 system_pods.go:61] "etcd-pause-644788" [1fe50aac-82bf-4b34-a62a-de740c19f8a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 01:01:45.321827  631285 system_pods.go:61] "kindnet-nxl78" [2cd1ad76-088c-4810-9812-5fa72cc11eab] Running
	I0214 01:01:45.321840  631285 system_pods.go:61] "kube-apiserver-pause-644788" [f950e242-985f-4426-82e0-ca23a4b7b158] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 01:01:45.321861  631285 system_pods.go:61] "kube-controller-manager-pause-644788" [db5b3438-52ba-451c-b8a9-104888291481] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 01:01:45.321886  631285 system_pods.go:61] "kube-proxy-bnbc8" [c162e76e-4f54-45bb-908d-b3e05565dcad] Running
	I0214 01:01:45.321904  631285 system_pods.go:61] "kube-scheduler-pause-644788" [e2626363-c6c6-4342-b4ab-d6a6e90d4911] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 01:01:45.321915  631285 system_pods.go:74] duration metric: took 10.301354ms to wait for pod list to return data ...
	I0214 01:01:45.321933  631285 node_conditions.go:102] verifying NodePressure condition ...
	I0214 01:01:45.328779  631285 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 01:01:45.328820  631285 node_conditions.go:123] node cpu capacity is 2
	I0214 01:01:45.328833  631285 node_conditions.go:105] duration metric: took 6.894ms to run NodePressure ...
	I0214 01:01:45.328886  631285 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:45.533970  631285 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0214 01:01:45.539386  631285 kubeadm.go:787] kubelet initialised
	I0214 01:01:45.539409  631285 kubeadm.go:788] duration metric: took 5.411467ms waiting for restarted kubelet to initialise ...
	I0214 01:01:45.539423  631285 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 01:01:45.548160  631285 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-blr8m" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:45.554543  631285 pod_ready.go:92] pod "coredns-5dd5756b68-blr8m" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:45.554571  631285 pod_ready.go:81] duration metric: took 6.37512ms waiting for pod "coredns-5dd5756b68-blr8m" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:45.554584  631285 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-644788" in "kube-system" namespace to be "Ready" ...
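
pod_ready.go polls each system-critical pod until its Ready condition reports "True", as the coredns and etcd entries above show. An equivalent check with client-go might look like the following sketch (the kubeconfig path, pod name, and 2s poll interval are placeholders):

    // waitPodReady polls a pod until its Ready condition is True, the check
    // pod_ready.go logs as `has status "Ready":"True"`.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-644788", metav1.GetOptions{})
            if err == nil && isReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
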
	I0214 01:01:42.207443  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:42.218066  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:42.234828  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:42.706768  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:42.706937  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:42.719284  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:43.206790  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:43.206896  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:43.220233  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:43.706732  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:43.706880  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:43.719225  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:44.206758  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:44.206870  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:44.218256  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:44.706782  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:44.706919  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:44.717087  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:45.207440  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:45.207561  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:45.222643  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:45.707272  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:45.707371  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:45.718136  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:46.206726  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:46.206833  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:46.219147  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:46.706773  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:46.706878  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:46.718008  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:47.561289  631285 pod_ready.go:102] pod "etcd-pause-644788" in "kube-system" namespace has status "Ready":"False"
	I0214 01:01:49.563588  631285 pod_ready.go:102] pod "etcd-pause-644788" in "kube-system" namespace has status "Ready":"False"
	I0214 01:01:50.562398  631285 pod_ready.go:92] pod "etcd-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:50.562461  631285 pod_ready.go:81] duration metric: took 5.007836503s waiting for pod "etcd-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:50.562483  631285 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:47.206907  634014 api_server.go:166] Checking apiserver status ...
	I0214 01:01:47.207011  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0214 01:01:47.218831  634014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:47.218865  634014 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0214 01:01:47.218875  634014 kubeadm.go:1135] stopping kube-system containers ...
	I0214 01:01:47.218885  634014 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0214 01:01:47.218938  634014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 01:01:47.276290  634014 cri.go:89] found id: "6d809d670ac39f3fcee90a17f09c841df74aa17e5d9d3bf527f8599867b1529b"
	I0214 01:01:47.276314  634014 cri.go:89] found id: "0109f740290088b7202e241f7ae1957acea5d15767696c34a578b777ab87b18c"
	I0214 01:01:47.276327  634014 cri.go:89] found id: "cd9c9fc6ba8ae7d0d9b620b4a1c051da4efd655c1f08daa800c6daeb57c0742f"
	I0214 01:01:47.276332  634014 cri.go:89] found id: "a8ce4a9548d0710ce47e6d3beacb8efc9b127165791e26a553d5b41680bd53a5"
	I0214 01:01:47.276337  634014 cri.go:89] found id: "d674f0f4150a7965322423bb19e9b9113ce1f6ba21bdb0c44c8e522311274514"
	I0214 01:01:47.276345  634014 cri.go:89] found id: ""
	I0214 01:01:47.276350  634014 cri.go:234] Stopping containers: [6d809d670ac39f3fcee90a17f09c841df74aa17e5d9d3bf527f8599867b1529b 0109f740290088b7202e241f7ae1957acea5d15767696c34a578b777ab87b18c cd9c9fc6ba8ae7d0d9b620b4a1c051da4efd655c1f08daa800c6daeb57c0742f a8ce4a9548d0710ce47e6d3beacb8efc9b127165791e26a553d5b41680bd53a5 d674f0f4150a7965322423bb19e9b9113ce1f6ba21bdb0c44c8e522311274514]
	I0214 01:01:47.276416  634014 ssh_runner.go:195] Run: which crictl
	I0214 01:01:47.280692  634014 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 6d809d670ac39f3fcee90a17f09c841df74aa17e5d9d3bf527f8599867b1529b 0109f740290088b7202e241f7ae1957acea5d15767696c34a578b777ab87b18c cd9c9fc6ba8ae7d0d9b620b4a1c051da4efd655c1f08daa800c6daeb57c0742f a8ce4a9548d0710ce47e6d3beacb8efc9b127165791e26a553d5b41680bd53a5 d674f0f4150a7965322423bb19e9b9113ce1f6ba21bdb0c44c8e522311274514
	I0214 01:01:47.480016  634014 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0214 01:01:47.576823  634014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 01:01:47.585375  634014 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5651 Feb 14 01:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 14 01:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Feb 14 01:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 14 01:01 /etc/kubernetes/scheduler.conf
	
	I0214 01:01:47.585440  634014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 01:01:47.594063  634014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 01:01:47.602692  634014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 01:01:47.610691  634014 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:47.610758  634014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 01:01:47.619261  634014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 01:01:47.627618  634014 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0214 01:01:47.627690  634014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 01:01:47.635974  634014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 01:01:47.644824  634014 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0214 01:01:47.644851  634014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:47.695367  634014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:49.745768  634014 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.050365117s)
	I0214 01:01:49.745796  634014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:49.939338  634014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:50.017211  634014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:50.113271  634014 api_server.go:52] waiting for apiserver process to appear ...
	I0214 01:01:50.113354  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 01:01:50.613970  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 01:01:51.113483  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 01:01:51.131860  634014 api_server.go:72] duration metric: took 1.018588839s to wait for apiserver process to appear ...
	I0214 01:01:51.131889  634014 api_server.go:88] waiting for apiserver healthz status ...
	I0214 01:01:51.131909  634014 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
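
Before rerunning the kubeadm phases above, the restart path greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and deletes any file that no longer references it; that is why controller-manager.conf and scheduler.conf were removed and regenerated. A plain Go sketch of the staleness check, using the endpoint and file list from the log:

    // pointsAt reports whether a kubeconfig file still references the
    // expected control-plane endpoint; stale files are removed and rebuilt
    // by `kubeadm init phase kubeconfig`.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func pointsAt(path, endpoint string) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        return strings.Contains(string(data), endpoint), nil
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            ok, err := pointsAt(f, endpoint)
            fmt.Printf("%s: endpoint present=%v err=%v\n", f, ok, err)
        }
    }
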
	I0214 01:01:52.574372  631285 pod_ready.go:102] pod "kube-apiserver-pause-644788" in "kube-system" namespace has status "Ready":"False"
	I0214 01:01:55.070021  631285 pod_ready.go:102] pod "kube-apiserver-pause-644788" in "kube-system" namespace has status "Ready":"False"
	I0214 01:01:56.081497  631285 pod_ready.go:92] pod "kube-apiserver-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:56.081525  631285 pod_ready.go:81] duration metric: took 5.519033491s waiting for pod "kube-apiserver-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.081537  631285 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.102776  631285 pod_ready.go:92] pod "kube-controller-manager-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:56.102809  631285 pod_ready.go:81] duration metric: took 21.255224ms waiting for pod "kube-controller-manager-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.102824  631285 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bnbc8" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.116577  631285 pod_ready.go:92] pod "kube-proxy-bnbc8" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:56.116602  631285 pod_ready.go:81] duration metric: took 13.771099ms waiting for pod "kube-proxy-bnbc8" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.116614  631285 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.129681  631285 pod_ready.go:92] pod "kube-scheduler-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:56.129714  631285 pod_ready.go:81] duration metric: took 13.083301ms waiting for pod "kube-scheduler-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.129754  631285 pod_ready.go:38] duration metric: took 10.590316695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 01:01:56.129784  631285 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 01:01:56.150819  631285 ops.go:34] apiserver oom_adj: -16
	I0214 01:01:56.150899  631285 kubeadm.go:640] restartCluster took 30.475893583s
	I0214 01:01:56.150924  631285 kubeadm.go:406] StartCluster complete in 30.621349297s
	I0214 01:01:56.150972  631285 settings.go:142] acquiring lock: {Name:mk6da46f5cb0f714c2fcf3244fbf0dfa768ab578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 01:01:56.151054  631285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 01:01:56.152049  631285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/kubeconfig: {Name:mke09ed5dbaa4240bee61fddd1ec0468d82bdfbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 01:01:56.153023  631285 kapi.go:59] client config for pause-644788: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/pause-644788/client.crt", KeyFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/pause-644788/client.key", CAFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 01:01:56.153267  631285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 01:01:56.153265  631285 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0214 01:01:56.156139  631285 out.go:177] * Enabled addons: 
	I0214 01:01:54.904237  634014 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0214 01:01:54.904262  634014 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0214 01:01:54.904275  634014 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0214 01:01:55.027476  634014 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0214 01:01:55.027512  634014 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0214 01:01:55.132683  634014 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0214 01:01:55.190664  634014 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0214 01:01:55.190746  634014 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0214 01:01:55.632189  634014 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0214 01:01:55.640316  634014 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0214 01:01:55.640350  634014 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0214 01:01:56.132942  634014 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0214 01:01:56.141946  634014 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0214 01:01:56.167155  634014 api_server.go:141] control plane version: v1.29.0-rc.2
	I0214 01:01:56.167184  634014 api_server.go:131] duration metric: took 5.035288092s to wait for apiserver health ...
	I0214 01:01:56.167195  634014 cni.go:84] Creating CNI manager for ""
	I0214 01:01:56.167207  634014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 01:01:56.153515  631285 config.go:182] Loaded profile config "pause-644788": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0214 01:01:56.158052  631285 addons.go:505] enable addons completed in 4.7904ms: enabled=[]
	I0214 01:01:56.167103  631285 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-644788" context rescaled to 1 replicas
	I0214 01:01:56.167246  631285 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 01:01:56.169356  634014 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0214 01:01:56.169336  631285 out.go:177] * Verifying Kubernetes components...
	I0214 01:01:56.172259  634014 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0214 01:01:56.178680  634014 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl ...
	I0214 01:01:56.178699  634014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0214 01:01:56.203040  634014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0214 01:01:56.708586  634014 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 01:01:56.715802  634014 system_pods.go:59] 5 kube-system pods found
	I0214 01:01:56.715840  634014 system_pods.go:61] "etcd-kubernetes-upgrade-727193" [0c35de28-b98d-4d3b-84e8-80b0da8f5341] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 01:01:56.715849  634014 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-727193" [056cab3c-675f-4285-a6df-1f06baba33c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 01:01:56.715862  634014 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-727193" [5ff1bd9f-59b2-4c91-8a7b-d738b32779fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 01:01:56.715877  634014 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-727193" [54c44d00-c879-4e08-a541-20664e055cac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 01:01:56.715888  634014 system_pods.go:61] "storage-provisioner" [e194f502-0e13-46b6-829c-de6eb99fa84f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0214 01:01:56.715898  634014 system_pods.go:74] duration metric: took 7.290059ms to wait for pod list to return data ...
	I0214 01:01:56.715911  634014 node_conditions.go:102] verifying NodePressure condition ...
	I0214 01:01:56.719306  634014 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 01:01:56.719338  634014 node_conditions.go:123] node cpu capacity is 2
	I0214 01:01:56.719350  634014 node_conditions.go:105] duration metric: took 3.433528ms to run NodePressure ...
	I0214 01:01:56.719367  634014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 01:01:56.974445  634014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 01:01:56.983189  634014 ops.go:34] apiserver oom_adj: -16
	I0214 01:01:56.983249  634014 kubeadm.go:640] restartCluster took 19.799073677s
	I0214 01:01:56.983284  634014 kubeadm.go:406] StartCluster complete in 19.878889309s
	I0214 01:01:56.983332  634014 settings.go:142] acquiring lock: {Name:mk6da46f5cb0f714c2fcf3244fbf0dfa768ab578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 01:01:56.983423  634014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 01:01:56.984504  634014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/kubeconfig: {Name:mke09ed5dbaa4240bee61fddd1ec0468d82bdfbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 01:01:56.984762  634014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 01:01:56.985019  634014 config.go:182] Loaded profile config "kubernetes-upgrade-727193": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0214 01:01:56.985124  634014 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0214 01:01:56.985199  634014 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-727193"
	I0214 01:01:56.985215  634014 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-727193"
	W0214 01:01:56.985227  634014 addons.go:243] addon storage-provisioner should already be in state true
	I0214 01:01:56.985270  634014 host.go:66] Checking if "kubernetes-upgrade-727193" exists ...
	I0214 01:01:56.985685  634014 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-727193 --format={{.State.Status}}
	I0214 01:01:56.985662  634014 kapi.go:59] client config for kubernetes-upgrade-727193: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kubernetes-upgrade-727193/client.crt", KeyFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kubernetes-upgrade-727193/client.key", CAFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 01:01:56.986175  634014 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-727193"
	I0214 01:01:56.986219  634014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-727193"
	I0214 01:01:56.986516  634014 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-727193 --format={{.State.Status}}
	I0214 01:01:56.993042  634014 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-727193" context rescaled to 1 replicas
	I0214 01:01:56.993076  634014 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 01:01:57.000427  634014 out.go:177] * Verifying Kubernetes components...
	I0214 01:01:57.003267  634014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 01:01:57.029283  634014 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 01:01:57.032208  634014 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 01:01:57.032230  634014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 01:01:57.032293  634014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-727193
	I0214 01:01:57.035615  634014 kapi.go:59] client config for kubernetes-upgrade-727193: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kubernetes-upgrade-727193/client.crt", KeyFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kubernetes-upgrade-727193/client.key", CAFile:"/home/jenkins/minikube-integration/18169-498689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 01:01:57.035898  634014 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-727193"
	W0214 01:01:57.035925  634014 addons.go:243] addon default-storageclass should already be in state true
	I0214 01:01:57.035954  634014 host.go:66] Checking if "kubernetes-upgrade-727193" exists ...
	I0214 01:01:57.036436  634014 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-727193 --format={{.State.Status}}
	I0214 01:01:57.066390  634014 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 01:01:57.066420  634014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 01:01:57.066488  634014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-727193
	I0214 01:01:57.081898  634014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/kubernetes-upgrade-727193/id_rsa Username:docker}
	I0214 01:01:57.105881  634014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/kubernetes-upgrade-727193/id_rsa Username:docker}
	I0214 01:01:57.164454  634014 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0214 01:01:57.164521  634014 api_server.go:52] waiting for apiserver process to appear ...
	I0214 01:01:57.164593  634014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 01:01:57.186180  634014 api_server.go:72] duration metric: took 193.074593ms to wait for apiserver process to appear ...
	I0214 01:01:57.186204  634014 api_server.go:88] waiting for apiserver healthz status ...
	I0214 01:01:57.186249  634014 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0214 01:01:57.208330  634014 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0214 01:01:57.209508  634014 api_server.go:141] control plane version: v1.29.0-rc.2
	I0214 01:01:57.209533  634014 api_server.go:131] duration metric: took 23.320876ms to wait for apiserver health ...
	I0214 01:01:57.209542  634014 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 01:01:57.216069  634014 system_pods.go:59] 5 kube-system pods found
	I0214 01:01:57.216101  634014 system_pods.go:61] "etcd-kubernetes-upgrade-727193" [0c35de28-b98d-4d3b-84e8-80b0da8f5341] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 01:01:57.216140  634014 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-727193" [056cab3c-675f-4285-a6df-1f06baba33c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 01:01:57.216158  634014 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-727193" [5ff1bd9f-59b2-4c91-8a7b-d738b32779fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 01:01:57.216173  634014 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-727193" [54c44d00-c879-4e08-a541-20664e055cac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 01:01:57.216184  634014 system_pods.go:61] "storage-provisioner" [e194f502-0e13-46b6-829c-de6eb99fa84f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0214 01:01:57.216194  634014 system_pods.go:74] duration metric: took 6.645625ms to wait for pod list to return data ...
	I0214 01:01:57.216219  634014 kubeadm.go:581] duration metric: took 223.118549ms to wait for : map[apiserver:true system_pods:true] ...
	I0214 01:01:57.216248  634014 node_conditions.go:102] verifying NodePressure condition ...
	I0214 01:01:57.219133  634014 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 01:01:57.219155  634014 node_conditions.go:123] node cpu capacity is 2
	I0214 01:01:57.219165  634014 node_conditions.go:105] duration metric: took 2.912399ms to run NodePressure ...
	I0214 01:01:57.219193  634014 start.go:228] waiting for startup goroutines ...
	I0214 01:01:57.258528  634014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 01:01:57.267021  634014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 01:01:58.159795  634014 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0214 01:01:58.162014  634014 addons.go:505] enable addons completed in 1.176883588s: enabled=[storage-provisioner default-storageclass]
	I0214 01:01:58.162062  634014 start.go:233] waiting for cluster config update ...
	I0214 01:01:58.162077  634014 start.go:242] writing updated cluster config ...
	I0214 01:01:58.162373  634014 ssh_runner.go:195] Run: rm -f paused
	I0214 01:01:58.219519  634014 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0214 01:01:58.222184  634014 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-727193" cluster and "default" namespace by default
	I0214 01:01:56.172439  631285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 01:01:56.327490  631285 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0214 01:01:56.327590  631285 node_ready.go:35] waiting up to 6m0s for node "pause-644788" to be "Ready" ...
	I0214 01:01:56.330767  631285 node_ready.go:49] node "pause-644788" has status "Ready":"True"
	I0214 01:01:56.330827  631285 node_ready.go:38] duration metric: took 3.197074ms waiting for node "pause-644788" to be "Ready" ...
	I0214 01:01:56.330860  631285 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 01:01:56.337299  631285 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-blr8m" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.470054  631285 pod_ready.go:92] pod "coredns-5dd5756b68-blr8m" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:56.470122  631285 pod_ready.go:81] duration metric: took 132.743373ms waiting for pod "coredns-5dd5756b68-blr8m" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.470149  631285 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.866876  631285 pod_ready.go:92] pod "etcd-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:56.866949  631285 pod_ready.go:81] duration metric: took 396.777806ms waiting for pod "etcd-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:56.866979  631285 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:57.266075  631285 pod_ready.go:92] pod "kube-apiserver-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:57.266100  631285 pod_ready.go:81] duration metric: took 399.100268ms waiting for pod "kube-apiserver-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:57.266117  631285 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:57.667148  631285 pod_ready.go:92] pod "kube-controller-manager-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:57.667175  631285 pod_ready.go:81] duration metric: took 401.050048ms waiting for pod "kube-controller-manager-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:57.667187  631285 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bnbc8" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:58.067490  631285 pod_ready.go:92] pod "kube-proxy-bnbc8" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:58.067517  631285 pod_ready.go:81] duration metric: took 400.321782ms waiting for pod "kube-proxy-bnbc8" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:58.067529  631285 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:58.466140  631285 pod_ready.go:92] pod "kube-scheduler-pause-644788" in "kube-system" namespace has status "Ready":"True"
	I0214 01:01:58.466167  631285 pod_ready.go:81] duration metric: took 398.630511ms waiting for pod "kube-scheduler-pause-644788" in "kube-system" namespace to be "Ready" ...
	I0214 01:01:58.466178  631285 pod_ready.go:38] duration metric: took 2.135292649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 01:01:58.466192  631285 api_server.go:52] waiting for apiserver process to appear ...
	I0214 01:01:58.466258  631285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 01:01:58.478074  631285 api_server.go:72] duration metric: took 2.310779349s to wait for apiserver process to appear ...
	I0214 01:01:58.478101  631285 api_server.go:88] waiting for apiserver healthz status ...
	I0214 01:01:58.478124  631285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0214 01:01:58.487582  631285 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0214 01:01:58.489462  631285 api_server.go:141] control plane version: v1.28.4
	I0214 01:01:58.489483  631285 api_server.go:131] duration metric: took 11.375455ms to wait for apiserver health ...
	I0214 01:01:58.489491  631285 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 01:01:58.669807  631285 system_pods.go:59] 7 kube-system pods found
	I0214 01:01:58.669840  631285 system_pods.go:61] "coredns-5dd5756b68-blr8m" [79232acc-f48d-4b46-8c04-17e044441e02] Running
	I0214 01:01:58.669847  631285 system_pods.go:61] "etcd-pause-644788" [1fe50aac-82bf-4b34-a62a-de740c19f8a0] Running
	I0214 01:01:58.669853  631285 system_pods.go:61] "kindnet-nxl78" [2cd1ad76-088c-4810-9812-5fa72cc11eab] Running
	I0214 01:01:58.669858  631285 system_pods.go:61] "kube-apiserver-pause-644788" [f950e242-985f-4426-82e0-ca23a4b7b158] Running
	I0214 01:01:58.669864  631285 system_pods.go:61] "kube-controller-manager-pause-644788" [db5b3438-52ba-451c-b8a9-104888291481] Running
	I0214 01:01:58.669870  631285 system_pods.go:61] "kube-proxy-bnbc8" [c162e76e-4f54-45bb-908d-b3e05565dcad] Running
	I0214 01:01:58.669875  631285 system_pods.go:61] "kube-scheduler-pause-644788" [e2626363-c6c6-4342-b4ab-d6a6e90d4911] Running
	I0214 01:01:58.669881  631285 system_pods.go:74] duration metric: took 180.385029ms to wait for pod list to return data ...
	I0214 01:01:58.669895  631285 default_sa.go:34] waiting for default service account to be created ...
	I0214 01:01:58.866480  631285 default_sa.go:45] found service account: "default"
	I0214 01:01:58.866505  631285 default_sa.go:55] duration metric: took 196.598335ms for default service account to be created ...
	I0214 01:01:58.866516  631285 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 01:01:59.069051  631285 system_pods.go:86] 7 kube-system pods found
	I0214 01:01:59.069084  631285 system_pods.go:89] "coredns-5dd5756b68-blr8m" [79232acc-f48d-4b46-8c04-17e044441e02] Running
	I0214 01:01:59.069092  631285 system_pods.go:89] "etcd-pause-644788" [1fe50aac-82bf-4b34-a62a-de740c19f8a0] Running
	I0214 01:01:59.069124  631285 system_pods.go:89] "kindnet-nxl78" [2cd1ad76-088c-4810-9812-5fa72cc11eab] Running
	I0214 01:01:59.069133  631285 system_pods.go:89] "kube-apiserver-pause-644788" [f950e242-985f-4426-82e0-ca23a4b7b158] Running
	I0214 01:01:59.069139  631285 system_pods.go:89] "kube-controller-manager-pause-644788" [db5b3438-52ba-451c-b8a9-104888291481] Running
	I0214 01:01:59.069150  631285 system_pods.go:89] "kube-proxy-bnbc8" [c162e76e-4f54-45bb-908d-b3e05565dcad] Running
	I0214 01:01:59.069155  631285 system_pods.go:89] "kube-scheduler-pause-644788" [e2626363-c6c6-4342-b4ab-d6a6e90d4911] Running
	I0214 01:01:59.069169  631285 system_pods.go:126] duration metric: took 202.646728ms to wait for k8s-apps to be running ...
	I0214 01:01:59.069176  631285 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 01:01:59.069262  631285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 01:01:59.081382  631285 system_svc.go:56] duration metric: took 12.195224ms WaitForService to wait for kubelet.
	I0214 01:01:59.081406  631285 kubeadm.go:581] duration metric: took 2.914120041s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0214 01:01:59.081426  631285 node_conditions.go:102] verifying NodePressure condition ...
	I0214 01:01:59.266169  631285 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 01:01:59.266200  631285 node_conditions.go:123] node cpu capacity is 2
	I0214 01:01:59.266213  631285 node_conditions.go:105] duration metric: took 184.782389ms to run NodePressure ...
	I0214 01:01:59.266225  631285 start.go:228] waiting for startup goroutines ...
	I0214 01:01:59.266232  631285 start.go:233] waiting for cluster config update ...
	I0214 01:01:59.266239  631285 start.go:242] writing updated cluster config ...
	I0214 01:01:59.266885  631285 ssh_runner.go:195] Run: rm -f paused
	I0214 01:01:59.353523  631285 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0214 01:01:59.356670  631285 out.go:177] * Done! kubectl is now configured to use "pause-644788" cluster and "default" namespace by default
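	The readiness and healthz checks logged above can be spot-checked by hand against the same profile. A minimal sketch follows (the context name, endpoint, and client cert/key paths are the ones reported in the kapi.go client config line above; the ~/.minikube paths will differ per machine):
	# wait for the node and kube-system pods the test polls for, using kubectl directly
	kubectl --context pause-644788 wait --for=condition=Ready node/pause-644788 --timeout=60s
	kubectl --context pause-644788 wait --for=condition=Ready pod --all -n kube-system --timeout=120s
	# query the apiserver healthz endpoint that api_server.go polls (expects "ok" once the control plane is up)
	curl --cacert ~/.minikube/ca.crt \
	     --cert   ~/.minikube/profiles/pause-644788/client.crt \
	     --key    ~/.minikube/profiles/pause-644788/client.key \
	     https://192.168.76.2:8443/healthz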
	
	
	==> CRI-O <==
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.014188281Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-blr8m/coredns" id=2b6c89c7-e7d5-421d-aa90-1fdddf4ad4c5 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.014240334Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.059680002Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/07be62f0e7298d334539e6dcb87d40860031f6d6c382e795d4a42a64ce063b47/merged/etc/passwd: no such file or directory"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.059730193Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/07be62f0e7298d334539e6dcb87d40860031f6d6c382e795d4a42a64ce063b47/merged/etc/group: no such file or directory"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.134974861Z" level=info msg="Created container 3aa4c606a1bc4206d33a1d001f935a220af8a72cc86b8e0aefa1a08f05563813: kube-system/coredns-5dd5756b68-blr8m/coredns" id=2b6c89c7-e7d5-421d-aa90-1fdddf4ad4c5 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.135618287Z" level=info msg="Starting container: 3aa4c606a1bc4206d33a1d001f935a220af8a72cc86b8e0aefa1a08f05563813" id=da4b2fad-62fc-44ec-8eb3-0f141d1db487 name=/runtime.v1.RuntimeService/StartContainer
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.138974801Z" level=info msg="Created container 3a94bffbddc1e5f904a66903ec026de4cb4ae859959a8e5ce5d1d9280ef749f3: kube-system/kindnet-nxl78/kindnet-cni" id=1de47d64-a721-4b1d-b1ec-42bbaf171f14 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.139508623Z" level=info msg="Starting container: 3a94bffbddc1e5f904a66903ec026de4cb4ae859959a8e5ce5d1d9280ef749f3" id=745b635e-7ef6-4a9a-8a0f-90f1199113f5 name=/runtime.v1.RuntimeService/StartContainer
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.148312601Z" level=info msg="Created container 94bb48cca6c1a802e0a08903626abc1c974b7d0a5fcb03476c3a029f626a285a: kube-system/kube-proxy-bnbc8/kube-proxy" id=b25f79e6-85c5-4bc1-ba6c-59bf9e9d0b88 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.148967981Z" level=info msg="Starting container: 94bb48cca6c1a802e0a08903626abc1c974b7d0a5fcb03476c3a029f626a285a" id=78e7998d-d7b5-499e-b176-8bbd209a37fe name=/runtime.v1.RuntimeService/StartContainer
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.154387202Z" level=info msg="Started container" PID=3256 containerID=3aa4c606a1bc4206d33a1d001f935a220af8a72cc86b8e0aefa1a08f05563813 description=kube-system/coredns-5dd5756b68-blr8m/coredns id=da4b2fad-62fc-44ec-8eb3-0f141d1db487 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9c06cd1598992ebca09f7225fc3aca16471488793bbee8edc87dc60b0ce23fe
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.160796045Z" level=info msg="Started container" PID=3234 containerID=3a94bffbddc1e5f904a66903ec026de4cb4ae859959a8e5ce5d1d9280ef749f3 description=kube-system/kindnet-nxl78/kindnet-cni id=745b635e-7ef6-4a9a-8a0f-90f1199113f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=77eeda6873546623eea6b4ee1ac10115368a58b8d2e1d8374c920b394e9bf798
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.168016583Z" level=info msg="Started container" PID=3247 containerID=94bb48cca6c1a802e0a08903626abc1c974b7d0a5fcb03476c3a029f626a285a description=kube-system/kube-proxy-bnbc8/kube-proxy id=78e7998d-d7b5-499e-b176-8bbd209a37fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b25c68a7bf1193bcecf8715eb16c0f700a1cbb92a5fb2b8af867aba2ebae310
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.516904562Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.527498427Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.527536499Z" level=info msg="Updated default CNI network name to kindnet"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.527553393Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.531132208Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.531168696Z" level=info msg="Updated default CNI network name to kindnet"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.531186640Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.534987829Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.535025974Z" level=info msg="Updated default CNI network name to kindnet"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.535043098Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.545255993Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.545290709Z" level=info msg="Updated default CNI network name to kindnet"
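	The CNI monitoring events above show CRI-O re-reading kindnet's conflist as kindnet rewrites it. To confirm which CNI configuration the runtime settled on, the files can be inspected inside the node; a sketch, using the profile name and paths as logged above:
	minikube -p pause-644788 ssh "sudo ls /etc/cni/net.d/"
	minikube -p pause-644788 ssh "sudo cat /etc/cni/net.d/10-kindnet.conflist"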
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3aa4c606a1bc4       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   17 seconds ago      Running             coredns                   2                   b9c06cd159899       coredns-5dd5756b68-blr8m
	94bb48cca6c1a       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   17 seconds ago      Running             kube-proxy                2                   3b25c68a7bf11       kube-proxy-bnbc8
	3a94bffbddc1e       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   17 seconds ago      Running             kindnet-cni               2                   77eeda6873546       kindnet-nxl78
	11b70319d2ea5       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   22 seconds ago      Running             kube-apiserver            2                   7af8c2f321950       kube-apiserver-pause-644788
	846a400f113c3       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   22 seconds ago      Running             kube-controller-manager   2                   e28b841896c87       kube-controller-manager-pause-644788
	bb4d5dfa385f8       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   22 seconds ago      Running             kube-scheduler            2                   501097f7d088d       kube-scheduler-pause-644788
	bdf65837da609       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   22 seconds ago      Running             etcd                      2                   706bddbd852f8       etcd-pause-644788
	fc2fa8ea8932e       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   48 seconds ago      Exited              coredns                   1                   b9c06cd159899       coredns-5dd5756b68-blr8m
	1e27fa7d6c327       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   48 seconds ago      Exited              kube-scheduler            1                   501097f7d088d       kube-scheduler-pause-644788
	9c5669437ab68       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   48 seconds ago      Exited              kube-apiserver            1                   7af8c2f321950       kube-apiserver-pause-644788
	6566d451e3d5e       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   48 seconds ago      Exited              kube-proxy                1                   3b25c68a7bf11       kube-proxy-bnbc8
	827ba3fcdf829       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   48 seconds ago      Exited              kindnet-cni               1                   77eeda6873546       kindnet-nxl78
	b49257c924606       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   48 seconds ago      Exited              kube-controller-manager   1                   e28b841896c87       kube-controller-manager-pause-644788
	378b9e0fb60b3       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   48 seconds ago      Exited              etcd                      1                   706bddbd852f8       etcd-pause-644788
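	The table above is crictl's view of the containers on the node; it can be regenerated, and individual container logs pulled, from inside the node. A rough sketch (the truncated container IDs are the ones listed above):
	minikube -p pause-644788 ssh "sudo crictl ps -a"
	minikube -p pause-644788 ssh "sudo crictl logs 3aa4c606a1bc4"      # current coredns container
	minikube -p pause-644788 ssh "sudo crictl inspect bdf65837da609"   # running etcd container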
	
	
	==> coredns [3aa4c606a1bc4206d33a1d001f935a220af8a72cc86b8e0aefa1a08f05563813] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40019 - 40649 "HINFO IN 1454937829151704872.3502806487380847386. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023207589s
	
	
	==> coredns [fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:33156 - 52852 "HINFO IN 5168320891635668640.7268165789834317859. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024913489s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-644788
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-644788
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90664111bc55fed26ce3e984eae935c06b114802
	                    minikube.k8s.io/name=pause-644788
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_14T01_00_20_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Feb 2024 01:00:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-644788
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Feb 2024 01:01:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Feb 2024 01:01:43 +0000   Wed, 14 Feb 2024 01:00:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Feb 2024 01:01:43 +0000   Wed, 14 Feb 2024 01:00:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Feb 2024 01:01:43 +0000   Wed, 14 Feb 2024 01:00:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Feb 2024 01:01:43 +0000   Wed, 14 Feb 2024 01:01:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-644788
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 c2b8222b29e84a9a8d9facb997f9c4fa
	  System UUID:                2f5f1a86-56ac-4774-a975-dd60e101ebd8
	  Boot ID:                    abc429c2-787e-4b53-ac30-814ea59b0a0f
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-blr8m                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     89s
	  kube-system                 etcd-pause-644788                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         102s
	  kube-system                 kindnet-nxl78                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-pause-644788             250m (12%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-controller-manager-pause-644788    200m (10%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-bnbc8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-pause-644788             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 87s                  kube-proxy       
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  111s (x8 over 111s)  kubelet          Node pause-644788 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s (x8 over 111s)  kubelet          Node pause-644788 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s (x8 over 111s)  kubelet          Node pause-644788 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     102s                 kubelet          Node pause-644788 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  102s                 kubelet          Node pause-644788 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s                 kubelet          Node pause-644788 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           90s                  node-controller  Node pause-644788 event: Registered Node pause-644788 in Controller
	  Normal  NodeReady                59s                  kubelet          Node pause-644788 status is now: NodeReady
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 24s)    kubelet          Node pause-644788 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 24s)    kubelet          Node pause-644788 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x8 over 24s)    kubelet          Node pause-644788 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5s                   node-controller  Node pause-644788 event: Registered Node pause-644788 in Controller
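	The node section above is kubectl describe node output; the same Ready condition the test waits on can be checked directly. A sketch, assuming the pause-644788 context is still configured locally:
	kubectl --context pause-644788 describe node pause-644788
	kubectl --context pause-644788 get node pause-644788 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'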
	
	
	==> dmesg <==
	[  +0.001098] FS-Cache: N-key=[8] '523c5c0100000000'
	[  +0.009444] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000977] FS-Cache: O-cookie d=00000000195ec576{9p.inode} n=000000001ec5b948
	[  +0.001090] FS-Cache: O-key=[8] '523c5c0100000000'
	[  +0.000791] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000962] FS-Cache: N-cookie d=00000000195ec576{9p.inode} n=0000000044435d8b
	[  +0.001054] FS-Cache: N-key=[8] '523c5c0100000000'
	[  +3.160002] FS-Cache: Duplicate cookie detected
	[  +0.000835] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001129] FS-Cache: O-cookie d=00000000195ec576{9p.inode} n=0000000086378eab
	[  +0.001313] FS-Cache: O-key=[8] '513c5c0100000000'
	[  +0.000789] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001103] FS-Cache: N-cookie d=00000000195ec576{9p.inode} n=0000000073249069
	[  +0.001281] FS-Cache: N-key=[8] '513c5c0100000000'
	[  +0.406244] FS-Cache: Duplicate cookie detected
	[  +0.000751] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001144] FS-Cache: O-cookie d=00000000195ec576{9p.inode} n=00000000fcf6afbd
	[  +0.001081] FS-Cache: O-key=[8] '573c5c0100000000'
	[  +0.000734] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001120] FS-Cache: N-cookie d=00000000195ec576{9p.inode} n=00000000e456345f
	[  +0.001247] FS-Cache: N-key=[8] '573c5c0100000000'
	[Feb14 00:56] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Feb14 00:59] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +0.381127] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [378b9e0fb60b37985f1e3af2fe6b389e6349e3a17e3466d010eff322ea2a5d15] <==
	{"level":"info","ts":"2024-02-14T01:01:14.047217Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-14T01:01:15.123259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-14T01:01:15.123378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-14T01:01:15.125452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-02-14T01:01:15.125655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2024-02-14T01:01:15.125697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-14T01:01:15.125964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2024-02-14T01:01:15.126022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-14T01:01:15.128387Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-644788 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-14T01:01:15.138613Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-14T01:01:15.138651Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T01:01:15.140905Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-02-14T01:01:15.138728Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T01:01:15.145417Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-14T01:01:15.139131Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-14T01:01:15.427461Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-14T01:01:15.427751Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-644788","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2024-02-14T01:01:15.427859Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-14T01:01:15.427897Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-14T01:01:15.427989Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-14T01:01:15.427999Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-14T01:01:15.430364Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2024-02-14T01:01:15.449973Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-14T01:01:15.450092Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-14T01:01:15.450102Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-644788","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [bdf65837da6099dcfd1edddcabc09c01d424315e2e5695bf1c3fd29b37cf49c3] <==
	{"level":"info","ts":"2024-02-14T01:01:38.835289Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-14T01:01:38.835901Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-14T01:01:38.835966Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-14T01:01:38.836457Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-02-14T01:01:38.845755Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T01:01:38.846262Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T01:01:38.846313Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T01:01:38.846523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2024-02-14T01:01:38.846644Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2024-02-14T01:01:38.846775Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T01:01:38.846867Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T01:01:40.47496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-14T01:01:40.475195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-14T01:01:40.475247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-14T01:01:40.475291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2024-02-14T01:01:40.47533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-02-14T01:01:40.475376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2024-02-14T01:01:40.475416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-02-14T01:01:40.479433Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-644788 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-14T01:01:40.479596Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T01:01:40.480604Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-02-14T01:01:40.482071Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T01:01:40.486064Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-14T01:01:40.49143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-14T01:01:40.491528Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:02:01 up  3:44,  0 users,  load average: 1.91, 2.28, 2.00
	Linux pause-644788 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [3a94bffbddc1e5f904a66903ec026de4cb4ae859959a8e5ce5d1d9280ef749f3] <==
	I0214 01:01:44.219447       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0214 01:01:44.219647       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0214 01:01:44.219798       1 main.go:116] setting mtu 1500 for CNI 
	I0214 01:01:44.219839       1 main.go:146] kindnetd IP family: "ipv4"
	I0214 01:01:44.219882       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0214 01:01:44.514960       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0214 01:01:44.515005       1 main.go:227] handling current node
	I0214 01:01:54.538056       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0214 01:01:54.538089       1 main.go:227] handling current node
	
	
	==> kindnet [827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6] <==
	I0214 01:01:13.567967       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0214 01:01:13.585113       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0214 01:01:13.585356       1 main.go:116] setting mtu 1500 for CNI 
	I0214 01:01:13.585430       1 main.go:146] kindnetd IP family: "ipv4"
	I0214 01:01:13.585499       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0214 01:01:13.928124       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0214 01:01:13.928386       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> kube-apiserver [11b70319d2ea555b37f86e7f694f1b341327e72d4fd452196af5c0808ebf58b1] <==
	I0214 01:01:42.948807       1 naming_controller.go:291] Starting NamingConditionController
	I0214 01:01:42.948819       1 establishing_controller.go:76] Starting EstablishingController
	I0214 01:01:42.948838       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0214 01:01:42.948853       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0214 01:01:42.948871       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0214 01:01:43.105404       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0214 01:01:43.158182       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0214 01:01:43.165531       1 shared_informer.go:318] Caches are synced for configmaps
	I0214 01:01:43.171209       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0214 01:01:43.171324       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0214 01:01:43.171373       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0214 01:01:43.171422       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0214 01:01:43.171452       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0214 01:01:43.173282       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0214 01:01:43.173329       1 aggregator.go:166] initial CRD sync complete...
	I0214 01:01:43.173337       1 autoregister_controller.go:141] Starting autoregister controller
	I0214 01:01:43.173342       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0214 01:01:43.173348       1 cache.go:39] Caches are synced for autoregister controller
	E0214 01:01:43.181696       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0214 01:01:43.869986       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0214 01:01:45.301228       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0214 01:01:45.438093       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0214 01:01:45.449258       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0214 01:01:45.514561       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0214 01:01:45.524238       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc] <==
	I0214 01:01:14.124727       1 options.go:220] external host was not specified, using 192.168.76.2
	I0214 01:01:14.130009       1 server.go:148] Version: v1.28.4
	I0214 01:01:14.130049       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [846a400f113c356cc36a27f5b2ed31b64a29be2d889446953b28a99d1c83d1d0] <==
	I0214 01:01:56.040578       1 shared_informer.go:318] Caches are synced for PV protection
	I0214 01:01:56.047380       1 shared_informer.go:318] Caches are synced for taint
	I0214 01:01:56.047633       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0214 01:01:56.047769       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-644788"
	I0214 01:01:56.047861       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0214 01:01:56.047926       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0214 01:01:56.047998       1 taint_manager.go:210] "Sending events to api server"
	I0214 01:01:56.048692       1 event.go:307] "Event occurred" object="pause-644788" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-644788 event: Registered Node pause-644788 in Controller"
	I0214 01:01:56.052061       1 shared_informer.go:318] Caches are synced for GC
	I0214 01:01:56.052269       1 shared_informer.go:318] Caches are synced for PVC protection
	I0214 01:01:56.055529       1 shared_informer.go:318] Caches are synced for deployment
	I0214 01:01:56.059510       1 shared_informer.go:318] Caches are synced for crt configmap
	I0214 01:01:56.065794       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0214 01:01:56.065937       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0214 01:01:56.067331       1 shared_informer.go:318] Caches are synced for daemon sets
	I0214 01:01:56.075614       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0214 01:01:56.076913       1 shared_informer.go:318] Caches are synced for HPA
	I0214 01:01:56.076996       1 shared_informer.go:318] Caches are synced for job
	I0214 01:01:56.099981       1 shared_informer.go:318] Caches are synced for resource quota
	I0214 01:01:56.124964       1 shared_informer.go:318] Caches are synced for endpoint
	I0214 01:01:56.149684       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0214 01:01:56.195559       1 shared_informer.go:318] Caches are synced for resource quota
	I0214 01:01:56.500682       1 shared_informer.go:318] Caches are synced for garbage collector
	I0214 01:01:56.500791       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0214 01:01:56.584728       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314] <==
	
	
	==> kube-proxy [6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257] <==
	
	
	==> kube-proxy [94bb48cca6c1a802e0a08903626abc1c974b7d0a5fcb03476c3a029f626a285a] <==
	I0214 01:01:44.246338       1 server_others.go:69] "Using iptables proxy"
	I0214 01:01:44.267991       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0214 01:01:44.328799       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0214 01:01:44.331629       1 server_others.go:152] "Using iptables Proxier"
	I0214 01:01:44.331719       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0214 01:01:44.331752       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0214 01:01:44.331812       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0214 01:01:44.332037       1 server.go:846] "Version info" version="v1.28.4"
	I0214 01:01:44.332230       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 01:01:44.332973       1 config.go:188] "Starting service config controller"
	I0214 01:01:44.333041       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0214 01:01:44.333088       1 config.go:97] "Starting endpoint slice config controller"
	I0214 01:01:44.333117       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0214 01:01:44.333594       1 config.go:315] "Starting node config controller"
	I0214 01:01:44.333641       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0214 01:01:44.437870       1 shared_informer.go:318] Caches are synced for node config
	I0214 01:01:44.437906       1 shared_informer.go:318] Caches are synced for service config
	I0214 01:01:44.437950       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556] <==
	
	
	==> kube-scheduler [bb4d5dfa385f80af57587c4d583f35e459ebf2e63902eb5f1881562d5293ca0e] <==
	I0214 01:01:41.505303       1 serving.go:348] Generated self-signed cert in-memory
	W0214 01:01:42.966383       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0214 01:01:42.966531       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0214 01:01:42.966579       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0214 01:01:42.966614       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0214 01:01:43.113459       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0214 01:01:43.113556       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 01:01:43.116474       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0214 01:01:43.117035       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0214 01:01:43.117113       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 01:01:43.117167       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0214 01:01:43.217683       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 14 01:01:38 pause-644788 kubelet[2996]: I0214 01:01:38.374068    2996 scope.go:117] "RemoveContainer" containerID="9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc"
	Feb 14 01:01:38 pause-644788 kubelet[2996]: I0214 01:01:38.374647    2996 scope.go:117] "RemoveContainer" containerID="b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314"
	Feb 14 01:01:38 pause-644788 kubelet[2996]: I0214 01:01:38.374930    2996 scope.go:117] "RemoveContainer" containerID="1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556"
	Feb 14 01:01:38 pause-644788 kubelet[2996]: I0214 01:01:38.454650    2996 kubelet_node_status.go:70] "Attempting to register node" node="pause-644788"
	Feb 14 01:01:38 pause-644788 kubelet[2996]: E0214 01:01:38.455419    2996 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="pause-644788"
	Feb 14 01:01:38 pause-644788 kubelet[2996]: W0214 01:01:38.692158    2996 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 14 01:01:38 pause-644788 kubelet[2996]: E0214 01:01:38.692238    2996 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 14 01:01:39 pause-644788 kubelet[2996]: I0214 01:01:39.256721    2996 kubelet_node_status.go:70] "Attempting to register node" node="pause-644788"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.149466    2996 kubelet_node_status.go:108] "Node was previously registered" node="pause-644788"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.149579    2996 kubelet_node_status.go:73] "Successfully registered node" node="pause-644788"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.151906    2996 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.152751    2996 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.702917    2996 apiserver.go:52] "Watching apiserver"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.705941    2996 topology_manager.go:215] "Topology Admit Handler" podUID="79232acc-f48d-4b46-8c04-17e044441e02" podNamespace="kube-system" podName="coredns-5dd5756b68-blr8m"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.706057    2996 topology_manager.go:215] "Topology Admit Handler" podUID="2cd1ad76-088c-4810-9812-5fa72cc11eab" podNamespace="kube-system" podName="kindnet-nxl78"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.706123    2996 topology_manager.go:215] "Topology Admit Handler" podUID="c162e76e-4f54-45bb-908d-b3e05565dcad" podNamespace="kube-system" podName="kube-proxy-bnbc8"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.727917    2996 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.777451    2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cd1ad76-088c-4810-9812-5fa72cc11eab-lib-modules\") pod \"kindnet-nxl78\" (UID: \"2cd1ad76-088c-4810-9812-5fa72cc11eab\") " pod="kube-system/kindnet-nxl78"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.777495    2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c162e76e-4f54-45bb-908d-b3e05565dcad-lib-modules\") pod \"kube-proxy-bnbc8\" (UID: \"c162e76e-4f54-45bb-908d-b3e05565dcad\") " pod="kube-system/kube-proxy-bnbc8"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.777532    2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c162e76e-4f54-45bb-908d-b3e05565dcad-xtables-lock\") pod \"kube-proxy-bnbc8\" (UID: \"c162e76e-4f54-45bb-908d-b3e05565dcad\") " pod="kube-system/kube-proxy-bnbc8"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.777556    2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2cd1ad76-088c-4810-9812-5fa72cc11eab-cni-cfg\") pod \"kindnet-nxl78\" (UID: \"2cd1ad76-088c-4810-9812-5fa72cc11eab\") " pod="kube-system/kindnet-nxl78"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.777578    2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cd1ad76-088c-4810-9812-5fa72cc11eab-xtables-lock\") pod \"kindnet-nxl78\" (UID: \"2cd1ad76-088c-4810-9812-5fa72cc11eab\") " pod="kube-system/kindnet-nxl78"
	Feb 14 01:01:44 pause-644788 kubelet[2996]: I0214 01:01:44.006676    2996 scope.go:117] "RemoveContainer" containerID="fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7"
	Feb 14 01:01:44 pause-644788 kubelet[2996]: I0214 01:01:44.007290    2996 scope.go:117] "RemoveContainer" containerID="827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6"
	Feb 14 01:01:44 pause-644788 kubelet[2996]: I0214 01:01:44.007661    2996 scope.go:117] "RemoveContainer" containerID="6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-644788 -n pause-644788
helpers_test.go:261: (dbg) Run:  kubectl --context pause-644788 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-644788
helpers_test.go:235: (dbg) docker inspect pause-644788:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "184d19e687298e2ccb7e92e8033d99e1deb122dcb3a7986835793b8172d8495d",
	        "Created": "2024-02-14T00:59:53.316471425Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 627442,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T00:59:53.622272204Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/184d19e687298e2ccb7e92e8033d99e1deb122dcb3a7986835793b8172d8495d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/184d19e687298e2ccb7e92e8033d99e1deb122dcb3a7986835793b8172d8495d/hostname",
	        "HostsPath": "/var/lib/docker/containers/184d19e687298e2ccb7e92e8033d99e1deb122dcb3a7986835793b8172d8495d/hosts",
	        "LogPath": "/var/lib/docker/containers/184d19e687298e2ccb7e92e8033d99e1deb122dcb3a7986835793b8172d8495d/184d19e687298e2ccb7e92e8033d99e1deb122dcb3a7986835793b8172d8495d-json.log",
	        "Name": "/pause-644788",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-644788:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-644788",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/31376664be10f046ea4364990762dbaf8824987595c2779330a9c8db67466de4-init/diff:/var/lib/docker/overlay2/6bce6236d7ba68734b2ab000b848b0bb40e1e541964b0b25c50d016c8f0ef97c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31376664be10f046ea4364990762dbaf8824987595c2779330a9c8db67466de4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31376664be10f046ea4364990762dbaf8824987595c2779330a9c8db67466de4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31376664be10f046ea4364990762dbaf8824987595c2779330a9c8db67466de4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-644788",
	                "Source": "/var/lib/docker/volumes/pause-644788/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-644788",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-644788",
	                "name.minikube.sigs.k8s.io": "pause-644788",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b10e00d4e60323a1a1db11de843012b548f239a775e0297c0b14a363bbdf5e8e",
	            "SandboxKey": "/var/run/docker/netns/b10e00d4e603",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33587"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33586"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33583"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33585"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33584"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-644788": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "184d19e68729",
	                        "pause-644788"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "600f44a1766075a44606adf3483b14b1514603883f2208372d3926330fc0c99f",
	                    "EndpointID": "657d781cfefd0580d9b89cca0e965c9023381a9795b1c43f8db5a7010c3138f1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "pause-644788",
	                        "184d19e68729"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-644788 -n pause-644788
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-644788 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-644788 logs -n 25: (2.339882947s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-564237            | NoKubernetes-564237       | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC | 14 Feb 24 00:55 UTC |
	| start   | -p NoKubernetes-564237            | NoKubernetes-564237       | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC | 14 Feb 24 00:55 UTC |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-564237 sudo       | NoKubernetes-564237       | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-564237            | NoKubernetes-564237       | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC | 14 Feb 24 00:55 UTC |
	| start   | -p NoKubernetes-564237            | NoKubernetes-564237       | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC | 14 Feb 24 00:55 UTC |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-564237 sudo       | NoKubernetes-564237       | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-564237            | NoKubernetes-564237       | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC | 14 Feb 24 00:55 UTC |
	| start   | -p kubernetes-upgrade-727193      | kubernetes-upgrade-727193 | jenkins | v1.32.0 | 14 Feb 24 00:55 UTC | 14 Feb 24 00:56 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-212863         | missing-upgrade-212863    | jenkins | v1.32.0 | 14 Feb 24 00:56 UTC | 14 Feb 24 00:57 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-727193      | kubernetes-upgrade-727193 | jenkins | v1.32.0 | 14 Feb 24 00:56 UTC | 14 Feb 24 00:56 UTC |
	| start   | -p kubernetes-upgrade-727193      | kubernetes-upgrade-727193 | jenkins | v1.32.0 | 14 Feb 24 00:56 UTC | 14 Feb 24 01:01 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-212863         | missing-upgrade-212863    | jenkins | v1.32.0 | 14 Feb 24 00:57 UTC | 14 Feb 24 00:57 UTC |
	| start   | -p stopped-upgrade-055750         | minikube                  | jenkins | v1.26.0 | 14 Feb 24 00:57 UTC | 14 Feb 24 00:57 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --vm-driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-055750 stop       | minikube                  | jenkins | v1.26.0 | 14 Feb 24 00:57 UTC | 14 Feb 24 00:57 UTC |
	| start   | -p stopped-upgrade-055750         | stopped-upgrade-055750    | jenkins | v1.32.0 | 14 Feb 24 00:57 UTC | 14 Feb 24 00:58 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-055750         | stopped-upgrade-055750    | jenkins | v1.32.0 | 14 Feb 24 00:58 UTC | 14 Feb 24 00:58 UTC |
	| start   | -p running-upgrade-905465         | minikube                  | jenkins | v1.26.0 | 14 Feb 24 00:58 UTC | 14 Feb 24 00:59 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --vm-driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p running-upgrade-905465         | running-upgrade-905465    | jenkins | v1.32.0 | 14 Feb 24 00:59 UTC | 14 Feb 24 00:59 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-905465         | running-upgrade-905465    | jenkins | v1.32.0 | 14 Feb 24 00:59 UTC | 14 Feb 24 00:59 UTC |
	| start   | -p pause-644788 --memory=2048     | pause-644788              | jenkins | v1.32.0 | 14 Feb 24 00:59 UTC | 14 Feb 24 01:01 UTC |
	|         | --install-addons=false            |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker        |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p pause-644788                   | pause-644788              | jenkins | v1.32.0 | 14 Feb 24 01:01 UTC | 14 Feb 24 01:01 UTC |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-727193      | kubernetes-upgrade-727193 | jenkins | v1.32.0 | 14 Feb 24 01:01 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-727193      | kubernetes-upgrade-727193 | jenkins | v1.32.0 | 14 Feb 24 01:01 UTC | 14 Feb 24 01:01 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-727193      | kubernetes-upgrade-727193 | jenkins | v1.32.0 | 14 Feb 24 01:01 UTC | 14 Feb 24 01:02 UTC |
	| start   | -p force-systemd-flag-117007      | force-systemd-flag-117007 | jenkins | v1.32.0 | 14 Feb 24 01:02 UTC |                     |
	|         | --memory=2048 --force-systemd     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 01:02:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 01:02:00.814653  636361 out.go:291] Setting OutFile to fd 1 ...
	I0214 01:02:00.814844  636361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 01:02:00.814881  636361 out.go:304] Setting ErrFile to fd 2...
	I0214 01:02:00.814901  636361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 01:02:00.815263  636361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
	I0214 01:02:00.815750  636361 out.go:298] Setting JSON to false
	I0214 01:02:00.824704  636361 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13464,"bootTime":1707859057,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0214 01:02:00.824829  636361 start.go:138] virtualization:  
	I0214 01:02:00.827763  636361 out.go:177] * [force-systemd-flag-117007] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 01:02:00.829769  636361 out.go:177]   - MINIKUBE_LOCATION=18169
	I0214 01:02:00.831953  636361 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 01:02:00.829889  636361 notify.go:220] Checking for updates...
	I0214 01:02:00.834139  636361 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 01:02:00.836535  636361 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	I0214 01:02:00.838930  636361 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 01:02:00.841074  636361 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 01:02:00.843834  636361 config.go:182] Loaded profile config "pause-644788": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0214 01:02:00.843987  636361 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 01:02:00.869331  636361 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 01:02:00.869450  636361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 01:02:01.008930  636361 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-14 01:02:00.990588637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 01:02:01.009044  636361 docker.go:295] overlay module found
	I0214 01:02:01.012395  636361 out.go:177] * Using the docker driver based on user configuration
	I0214 01:02:01.014700  636361 start.go:298] selected driver: docker
	I0214 01:02:01.014745  636361 start.go:902] validating driver "docker" against <nil>
	I0214 01:02:01.014760  636361 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 01:02:01.015460  636361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 01:02:01.154698  636361 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-14 01:02:01.143042836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 01:02:01.154884  636361 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 01:02:01.155147  636361 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 01:02:01.157179  636361 out.go:177] * Using Docker driver with root privileges
	I0214 01:02:01.159119  636361 cni.go:84] Creating CNI manager for ""
	I0214 01:02:01.159150  636361 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 01:02:01.159166  636361 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 01:02:01.159186  636361 start_flags.go:321] config:
	{Name:force-systemd-flag-117007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-117007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 01:02:01.161337  636361 out.go:177] * Starting control plane node force-systemd-flag-117007 in cluster force-systemd-flag-117007
	I0214 01:02:01.163194  636361 cache.go:121] Beginning downloading kic base image for docker with crio
	I0214 01:02:01.165169  636361 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0214 01:02:01.167016  636361 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0214 01:02:01.167080  636361 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0214 01:02:01.167094  636361 cache.go:56] Caching tarball of preloaded images
	I0214 01:02:01.167192  636361 preload.go:174] Found /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0214 01:02:01.167203  636361 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0214 01:02:01.167317  636361 profile.go:148] Saving config to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/force-systemd-flag-117007/config.json ...
	I0214 01:02:01.167336  636361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/force-systemd-flag-117007/config.json: {Name:mk6fe49131e3619f1e6a9bde6000fce9a269e23d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 01:02:01.167524  636361 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 01:02:01.214062  636361 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0214 01:02:01.214084  636361 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0214 01:02:01.214105  636361 cache.go:194] Successfully downloaded all kic artifacts
	I0214 01:02:01.214143  636361 start.go:365] acquiring machines lock for force-systemd-flag-117007: {Name:mk71fae5eb1d0c5f91f0060044e7b156d223adfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 01:02:01.215311  636361 start.go:369] acquired machines lock for "force-systemd-flag-117007" in 1.144132ms
	I0214 01:02:01.215358  636361 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-117007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-117007 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 01:02:01.215443  636361 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.014188281Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-blr8m/coredns" id=2b6c89c7-e7d5-421d-aa90-1fdddf4ad4c5 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.014240334Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.059680002Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/07be62f0e7298d334539e6dcb87d40860031f6d6c382e795d4a42a64ce063b47/merged/etc/passwd: no such file or directory"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.059730193Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/07be62f0e7298d334539e6dcb87d40860031f6d6c382e795d4a42a64ce063b47/merged/etc/group: no such file or directory"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.134974861Z" level=info msg="Created container 3aa4c606a1bc4206d33a1d001f935a220af8a72cc86b8e0aefa1a08f05563813: kube-system/coredns-5dd5756b68-blr8m/coredns" id=2b6c89c7-e7d5-421d-aa90-1fdddf4ad4c5 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.135618287Z" level=info msg="Starting container: 3aa4c606a1bc4206d33a1d001f935a220af8a72cc86b8e0aefa1a08f05563813" id=da4b2fad-62fc-44ec-8eb3-0f141d1db487 name=/runtime.v1.RuntimeService/StartContainer
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.138974801Z" level=info msg="Created container 3a94bffbddc1e5f904a66903ec026de4cb4ae859959a8e5ce5d1d9280ef749f3: kube-system/kindnet-nxl78/kindnet-cni" id=1de47d64-a721-4b1d-b1ec-42bbaf171f14 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.139508623Z" level=info msg="Starting container: 3a94bffbddc1e5f904a66903ec026de4cb4ae859959a8e5ce5d1d9280ef749f3" id=745b635e-7ef6-4a9a-8a0f-90f1199113f5 name=/runtime.v1.RuntimeService/StartContainer
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.148312601Z" level=info msg="Created container 94bb48cca6c1a802e0a08903626abc1c974b7d0a5fcb03476c3a029f626a285a: kube-system/kube-proxy-bnbc8/kube-proxy" id=b25f79e6-85c5-4bc1-ba6c-59bf9e9d0b88 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.148967981Z" level=info msg="Starting container: 94bb48cca6c1a802e0a08903626abc1c974b7d0a5fcb03476c3a029f626a285a" id=78e7998d-d7b5-499e-b176-8bbd209a37fe name=/runtime.v1.RuntimeService/StartContainer
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.154387202Z" level=info msg="Started container" PID=3256 containerID=3aa4c606a1bc4206d33a1d001f935a220af8a72cc86b8e0aefa1a08f05563813 description=kube-system/coredns-5dd5756b68-blr8m/coredns id=da4b2fad-62fc-44ec-8eb3-0f141d1db487 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9c06cd1598992ebca09f7225fc3aca16471488793bbee8edc87dc60b0ce23fe
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.160796045Z" level=info msg="Started container" PID=3234 containerID=3a94bffbddc1e5f904a66903ec026de4cb4ae859959a8e5ce5d1d9280ef749f3 description=kube-system/kindnet-nxl78/kindnet-cni id=745b635e-7ef6-4a9a-8a0f-90f1199113f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=77eeda6873546623eea6b4ee1ac10115368a58b8d2e1d8374c920b394e9bf798
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.168016583Z" level=info msg="Started container" PID=3247 containerID=94bb48cca6c1a802e0a08903626abc1c974b7d0a5fcb03476c3a029f626a285a description=kube-system/kube-proxy-bnbc8/kube-proxy id=78e7998d-d7b5-499e-b176-8bbd209a37fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b25c68a7bf1193bcecf8715eb16c0f700a1cbb92a5fb2b8af867aba2ebae310
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.516904562Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.527498427Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.527536499Z" level=info msg="Updated default CNI network name to kindnet"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.527553393Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.531132208Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.531168696Z" level=info msg="Updated default CNI network name to kindnet"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.531186640Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.534987829Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.535025974Z" level=info msg="Updated default CNI network name to kindnet"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.535043098Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.545255993Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 14 01:01:44 pause-644788 crio[2562]: time="2024-02-14 01:01:44.545290709Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3aa4c606a1bc4       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   20 seconds ago      Running             coredns                   2                   b9c06cd159899       coredns-5dd5756b68-blr8m
	94bb48cca6c1a       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   20 seconds ago      Running             kube-proxy                2                   3b25c68a7bf11       kube-proxy-bnbc8
	3a94bffbddc1e       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   20 seconds ago      Running             kindnet-cni               2                   77eeda6873546       kindnet-nxl78
	11b70319d2ea5       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   26 seconds ago      Running             kube-apiserver            2                   7af8c2f321950       kube-apiserver-pause-644788
	846a400f113c3       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   26 seconds ago      Running             kube-controller-manager   2                   e28b841896c87       kube-controller-manager-pause-644788
	bb4d5dfa385f8       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   26 seconds ago      Running             kube-scheduler            2                   501097f7d088d       kube-scheduler-pause-644788
	bdf65837da609       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   26 seconds ago      Running             etcd                      2                   706bddbd852f8       etcd-pause-644788
	fc2fa8ea8932e       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   51 seconds ago      Exited              coredns                   1                   b9c06cd159899       coredns-5dd5756b68-blr8m
	1e27fa7d6c327       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   51 seconds ago      Exited              kube-scheduler            1                   501097f7d088d       kube-scheduler-pause-644788
	9c5669437ab68       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   51 seconds ago      Exited              kube-apiserver            1                   7af8c2f321950       kube-apiserver-pause-644788
	6566d451e3d5e       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   51 seconds ago      Exited              kube-proxy                1                   3b25c68a7bf11       kube-proxy-bnbc8
	827ba3fcdf829       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   51 seconds ago      Exited              kindnet-cni               1                   77eeda6873546       kindnet-nxl78
	b49257c924606       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   51 seconds ago      Exited              kube-controller-manager   1                   e28b841896c87       kube-controller-manager-pause-644788
	378b9e0fb60b3       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   51 seconds ago      Exited              etcd                      1                   706bddbd852f8       etcd-pause-644788
	
	
	==> coredns [3aa4c606a1bc4206d33a1d001f935a220af8a72cc86b8e0aefa1a08f05563813] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40019 - 40649 "HINFO IN 1454937829151704872.3502806487380847386. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023207589s
	
	
	==> coredns [fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:33156 - 52852 "HINFO IN 5168320891635668640.7268165789834317859. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024913489s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-644788
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-644788
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90664111bc55fed26ce3e984eae935c06b114802
	                    minikube.k8s.io/name=pause-644788
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_14T01_00_20_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Feb 2024 01:00:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-644788
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Feb 2024 01:02:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Feb 2024 01:01:43 +0000   Wed, 14 Feb 2024 01:00:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Feb 2024 01:01:43 +0000   Wed, 14 Feb 2024 01:00:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Feb 2024 01:01:43 +0000   Wed, 14 Feb 2024 01:00:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Feb 2024 01:01:43 +0000   Wed, 14 Feb 2024 01:01:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-644788
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 c2b8222b29e84a9a8d9facb997f9c4fa
	  System UUID:                2f5f1a86-56ac-4774-a975-dd60e101ebd8
	  Boot ID:                    abc429c2-787e-4b53-ac30-814ea59b0a0f
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-blr8m                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     93s
	  kube-system                 etcd-pause-644788                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         106s
	  kube-system                 kindnet-nxl78                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      93s
	  kube-system                 kube-apiserver-pause-644788             250m (12%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-pause-644788    200m (10%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-bnbc8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-pause-644788             100m (5%)     0 (0%)      0 (0%)           0 (0%)         108s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  Starting                 20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node pause-644788 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node pause-644788 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 115s)  kubelet          Node pause-644788 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     106s                 kubelet          Node pause-644788 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  106s                 kubelet          Node pause-644788 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s                 kubelet          Node pause-644788 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           94s                  node-controller  Node pause-644788 event: Registered Node pause-644788 in Controller
	  Normal  NodeReady                63s                  kubelet          Node pause-644788 status is now: NodeReady
	  Normal  Starting                 28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 28s)    kubelet          Node pause-644788 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 28s)    kubelet          Node pause-644788 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x8 over 28s)    kubelet          Node pause-644788 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9s                   node-controller  Node pause-644788 event: Registered Node pause-644788 in Controller
	
	
	==> dmesg <==
	[  +0.001098] FS-Cache: N-key=[8] '523c5c0100000000'
	[  +0.009444] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000977] FS-Cache: O-cookie d=00000000195ec576{9p.inode} n=000000001ec5b948
	[  +0.001090] FS-Cache: O-key=[8] '523c5c0100000000'
	[  +0.000791] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000962] FS-Cache: N-cookie d=00000000195ec576{9p.inode} n=0000000044435d8b
	[  +0.001054] FS-Cache: N-key=[8] '523c5c0100000000'
	[  +3.160002] FS-Cache: Duplicate cookie detected
	[  +0.000835] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001129] FS-Cache: O-cookie d=00000000195ec576{9p.inode} n=0000000086378eab
	[  +0.001313] FS-Cache: O-key=[8] '513c5c0100000000'
	[  +0.000789] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001103] FS-Cache: N-cookie d=00000000195ec576{9p.inode} n=0000000073249069
	[  +0.001281] FS-Cache: N-key=[8] '513c5c0100000000'
	[  +0.406244] FS-Cache: Duplicate cookie detected
	[  +0.000751] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001144] FS-Cache: O-cookie d=00000000195ec576{9p.inode} n=00000000fcf6afbd
	[  +0.001081] FS-Cache: O-key=[8] '573c5c0100000000'
	[  +0.000734] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001120] FS-Cache: N-cookie d=00000000195ec576{9p.inode} n=00000000e456345f
	[  +0.001247] FS-Cache: N-key=[8] '573c5c0100000000'
	[Feb14 00:56] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Feb14 00:59] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +0.381127] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [378b9e0fb60b37985f1e3af2fe6b389e6349e3a17e3466d010eff322ea2a5d15] <==
	{"level":"info","ts":"2024-02-14T01:01:14.047217Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-14T01:01:15.123259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-14T01:01:15.123378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-14T01:01:15.125452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-02-14T01:01:15.125655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2024-02-14T01:01:15.125697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-14T01:01:15.125964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2024-02-14T01:01:15.126022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-14T01:01:15.128387Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-644788 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-14T01:01:15.138613Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-14T01:01:15.138651Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T01:01:15.140905Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-02-14T01:01:15.138728Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T01:01:15.145417Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-14T01:01:15.139131Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-14T01:01:15.427461Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-14T01:01:15.427751Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-644788","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2024-02-14T01:01:15.427859Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-14T01:01:15.427897Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-14T01:01:15.427989Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-14T01:01:15.427999Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-14T01:01:15.430364Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2024-02-14T01:01:15.449973Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-14T01:01:15.450092Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-14T01:01:15.450102Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-644788","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [bdf65837da6099dcfd1edddcabc09c01d424315e2e5695bf1c3fd29b37cf49c3] <==
	{"level":"info","ts":"2024-02-14T01:01:38.835289Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-14T01:01:38.835901Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-14T01:01:38.835966Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-14T01:01:38.836457Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-02-14T01:01:38.845755Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T01:01:38.846262Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T01:01:38.846313Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T01:01:38.846523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2024-02-14T01:01:38.846644Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2024-02-14T01:01:38.846775Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T01:01:38.846867Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T01:01:40.47496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-14T01:01:40.475195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-14T01:01:40.475247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-14T01:01:40.475291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2024-02-14T01:01:40.47533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-02-14T01:01:40.475376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2024-02-14T01:01:40.475416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-02-14T01:01:40.479433Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-644788 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-14T01:01:40.479596Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T01:01:40.480604Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-02-14T01:01:40.482071Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T01:01:40.486064Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-14T01:01:40.49143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-14T01:01:40.491528Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:02:05 up  3:44,  0 users,  load average: 2.00, 2.29, 2.01
	Linux pause-644788 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [3a94bffbddc1e5f904a66903ec026de4cb4ae859959a8e5ce5d1d9280ef749f3] <==
	I0214 01:01:44.219447       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0214 01:01:44.219647       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0214 01:01:44.219798       1 main.go:116] setting mtu 1500 for CNI 
	I0214 01:01:44.219839       1 main.go:146] kindnetd IP family: "ipv4"
	I0214 01:01:44.219882       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0214 01:01:44.514960       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0214 01:01:44.515005       1 main.go:227] handling current node
	I0214 01:01:54.538056       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0214 01:01:54.538089       1 main.go:227] handling current node
	I0214 01:02:04.551669       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0214 01:02:04.551701       1 main.go:227] handling current node
	
	
	==> kindnet [827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6] <==
	I0214 01:01:13.567967       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0214 01:01:13.585113       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0214 01:01:13.585356       1 main.go:116] setting mtu 1500 for CNI 
	I0214 01:01:13.585430       1 main.go:146] kindnetd IP family: "ipv4"
	I0214 01:01:13.585499       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0214 01:01:13.928124       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0214 01:01:13.928386       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> kube-apiserver [11b70319d2ea555b37f86e7f694f1b341327e72d4fd452196af5c0808ebf58b1] <==
	I0214 01:01:42.948807       1 naming_controller.go:291] Starting NamingConditionController
	I0214 01:01:42.948819       1 establishing_controller.go:76] Starting EstablishingController
	I0214 01:01:42.948838       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0214 01:01:42.948853       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0214 01:01:42.948871       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0214 01:01:43.105404       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0214 01:01:43.158182       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0214 01:01:43.165531       1 shared_informer.go:318] Caches are synced for configmaps
	I0214 01:01:43.171209       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0214 01:01:43.171324       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0214 01:01:43.171373       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0214 01:01:43.171422       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0214 01:01:43.171452       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0214 01:01:43.173282       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0214 01:01:43.173329       1 aggregator.go:166] initial CRD sync complete...
	I0214 01:01:43.173337       1 autoregister_controller.go:141] Starting autoregister controller
	I0214 01:01:43.173342       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0214 01:01:43.173348       1 cache.go:39] Caches are synced for autoregister controller
	E0214 01:01:43.181696       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0214 01:01:43.869986       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0214 01:01:45.301228       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0214 01:01:45.438093       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0214 01:01:45.449258       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0214 01:01:45.514561       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0214 01:01:45.524238       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc] <==
	I0214 01:01:14.124727       1 options.go:220] external host was not specified, using 192.168.76.2
	I0214 01:01:14.130009       1 server.go:148] Version: v1.28.4
	I0214 01:01:14.130049       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [846a400f113c356cc36a27f5b2ed31b64a29be2d889446953b28a99d1c83d1d0] <==
	I0214 01:01:56.040578       1 shared_informer.go:318] Caches are synced for PV protection
	I0214 01:01:56.047380       1 shared_informer.go:318] Caches are synced for taint
	I0214 01:01:56.047633       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0214 01:01:56.047769       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-644788"
	I0214 01:01:56.047861       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0214 01:01:56.047926       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0214 01:01:56.047998       1 taint_manager.go:210] "Sending events to api server"
	I0214 01:01:56.048692       1 event.go:307] "Event occurred" object="pause-644788" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-644788 event: Registered Node pause-644788 in Controller"
	I0214 01:01:56.052061       1 shared_informer.go:318] Caches are synced for GC
	I0214 01:01:56.052269       1 shared_informer.go:318] Caches are synced for PVC protection
	I0214 01:01:56.055529       1 shared_informer.go:318] Caches are synced for deployment
	I0214 01:01:56.059510       1 shared_informer.go:318] Caches are synced for crt configmap
	I0214 01:01:56.065794       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0214 01:01:56.065937       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0214 01:01:56.067331       1 shared_informer.go:318] Caches are synced for daemon sets
	I0214 01:01:56.075614       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0214 01:01:56.076913       1 shared_informer.go:318] Caches are synced for HPA
	I0214 01:01:56.076996       1 shared_informer.go:318] Caches are synced for job
	I0214 01:01:56.099981       1 shared_informer.go:318] Caches are synced for resource quota
	I0214 01:01:56.124964       1 shared_informer.go:318] Caches are synced for endpoint
	I0214 01:01:56.149684       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0214 01:01:56.195559       1 shared_informer.go:318] Caches are synced for resource quota
	I0214 01:01:56.500682       1 shared_informer.go:318] Caches are synced for garbage collector
	I0214 01:01:56.500791       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0214 01:01:56.584728       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314] <==
	
	
	==> kube-proxy [6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257] <==
	
	
	==> kube-proxy [94bb48cca6c1a802e0a08903626abc1c974b7d0a5fcb03476c3a029f626a285a] <==
	I0214 01:01:44.246338       1 server_others.go:69] "Using iptables proxy"
	I0214 01:01:44.267991       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0214 01:01:44.328799       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0214 01:01:44.331629       1 server_others.go:152] "Using iptables Proxier"
	I0214 01:01:44.331719       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0214 01:01:44.331752       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0214 01:01:44.331812       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0214 01:01:44.332037       1 server.go:846] "Version info" version="v1.28.4"
	I0214 01:01:44.332230       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 01:01:44.332973       1 config.go:188] "Starting service config controller"
	I0214 01:01:44.333041       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0214 01:01:44.333088       1 config.go:97] "Starting endpoint slice config controller"
	I0214 01:01:44.333117       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0214 01:01:44.333594       1 config.go:315] "Starting node config controller"
	I0214 01:01:44.333641       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0214 01:01:44.437870       1 shared_informer.go:318] Caches are synced for node config
	I0214 01:01:44.437906       1 shared_informer.go:318] Caches are synced for service config
	I0214 01:01:44.437950       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556] <==
	
	
	==> kube-scheduler [bb4d5dfa385f80af57587c4d583f35e459ebf2e63902eb5f1881562d5293ca0e] <==
	I0214 01:01:41.505303       1 serving.go:348] Generated self-signed cert in-memory
	W0214 01:01:42.966383       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0214 01:01:42.966531       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0214 01:01:42.966579       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0214 01:01:42.966614       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0214 01:01:43.113459       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0214 01:01:43.113556       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 01:01:43.116474       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0214 01:01:43.117035       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0214 01:01:43.117113       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 01:01:43.117167       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0214 01:01:43.217683       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 14 01:01:38 pause-644788 kubelet[2996]: I0214 01:01:38.374068    2996 scope.go:117] "RemoveContainer" containerID="9c5669437ab68463a0fc2e318cbce025a6cd51238fb774cc9b7b8332523d6bbc"
	Feb 14 01:01:38 pause-644788 kubelet[2996]: I0214 01:01:38.374647    2996 scope.go:117] "RemoveContainer" containerID="b49257c924606bc5536e8dbe1de7a7abbc081cce5a8c7695dc1e7108986ae314"
	Feb 14 01:01:38 pause-644788 kubelet[2996]: I0214 01:01:38.374930    2996 scope.go:117] "RemoveContainer" containerID="1e27fa7d6c3273856894c5018a42f23135e90293d04d73b66367aba704ed4556"
	Feb 14 01:01:38 pause-644788 kubelet[2996]: I0214 01:01:38.454650    2996 kubelet_node_status.go:70] "Attempting to register node" node="pause-644788"
	Feb 14 01:01:38 pause-644788 kubelet[2996]: E0214 01:01:38.455419    2996 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="pause-644788"
	Feb 14 01:01:38 pause-644788 kubelet[2996]: W0214 01:01:38.692158    2996 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 14 01:01:38 pause-644788 kubelet[2996]: E0214 01:01:38.692238    2996 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 14 01:01:39 pause-644788 kubelet[2996]: I0214 01:01:39.256721    2996 kubelet_node_status.go:70] "Attempting to register node" node="pause-644788"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.149466    2996 kubelet_node_status.go:108] "Node was previously registered" node="pause-644788"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.149579    2996 kubelet_node_status.go:73] "Successfully registered node" node="pause-644788"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.151906    2996 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.152751    2996 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.702917    2996 apiserver.go:52] "Watching apiserver"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.705941    2996 topology_manager.go:215] "Topology Admit Handler" podUID="79232acc-f48d-4b46-8c04-17e044441e02" podNamespace="kube-system" podName="coredns-5dd5756b68-blr8m"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.706057    2996 topology_manager.go:215] "Topology Admit Handler" podUID="2cd1ad76-088c-4810-9812-5fa72cc11eab" podNamespace="kube-system" podName="kindnet-nxl78"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.706123    2996 topology_manager.go:215] "Topology Admit Handler" podUID="c162e76e-4f54-45bb-908d-b3e05565dcad" podNamespace="kube-system" podName="kube-proxy-bnbc8"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.727917    2996 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.777451    2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cd1ad76-088c-4810-9812-5fa72cc11eab-lib-modules\") pod \"kindnet-nxl78\" (UID: \"2cd1ad76-088c-4810-9812-5fa72cc11eab\") " pod="kube-system/kindnet-nxl78"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.777495    2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c162e76e-4f54-45bb-908d-b3e05565dcad-lib-modules\") pod \"kube-proxy-bnbc8\" (UID: \"c162e76e-4f54-45bb-908d-b3e05565dcad\") " pod="kube-system/kube-proxy-bnbc8"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.777532    2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c162e76e-4f54-45bb-908d-b3e05565dcad-xtables-lock\") pod \"kube-proxy-bnbc8\" (UID: \"c162e76e-4f54-45bb-908d-b3e05565dcad\") " pod="kube-system/kube-proxy-bnbc8"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.777556    2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2cd1ad76-088c-4810-9812-5fa72cc11eab-cni-cfg\") pod \"kindnet-nxl78\" (UID: \"2cd1ad76-088c-4810-9812-5fa72cc11eab\") " pod="kube-system/kindnet-nxl78"
	Feb 14 01:01:43 pause-644788 kubelet[2996]: I0214 01:01:43.777578    2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cd1ad76-088c-4810-9812-5fa72cc11eab-xtables-lock\") pod \"kindnet-nxl78\" (UID: \"2cd1ad76-088c-4810-9812-5fa72cc11eab\") " pod="kube-system/kindnet-nxl78"
	Feb 14 01:01:44 pause-644788 kubelet[2996]: I0214 01:01:44.006676    2996 scope.go:117] "RemoveContainer" containerID="fc2fa8ea8932e17568ca0c859267ebca567afde8b324a2afcdc1257ceff343e7"
	Feb 14 01:01:44 pause-644788 kubelet[2996]: I0214 01:01:44.007290    2996 scope.go:117] "RemoveContainer" containerID="827ba3fcdf8298177b8d5635a80397733a673f909def22a48b84937941109ed6"
	Feb 14 01:01:44 pause-644788 kubelet[2996]: I0214 01:01:44.007661    2996 scope.go:117] "RemoveContainer" containerID="6566d451e3d5e47d7f6ae88a3cabbd1599403f5b1ee931c2350efc119ff04257"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-644788 -n pause-644788
helpers_test.go:261: (dbg) Run:  kubectl --context pause-644788 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (61.00s)


Test pass (279/314)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 21.85
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
9 TestDownloadOnly/v1.16.0/DeleteAll 0.21
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.28.4/json-events 18
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.09
18 TestDownloadOnly/v1.28.4/DeleteAll 0.21
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.29.0-rc.2/json-events 16.77
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.21
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.6
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 157.19
38 TestAddons/parallel/Registry 16.64
40 TestAddons/parallel/InspektorGadget 10.91
41 TestAddons/parallel/MetricsServer 6.8
44 TestAddons/parallel/CSI 54.12
45 TestAddons/parallel/Headlamp 13.58
46 TestAddons/parallel/CloudSpanner 5.68
47 TestAddons/parallel/LocalPath 51.62
48 TestAddons/parallel/NvidiaDevicePlugin 6.57
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.17
53 TestAddons/StoppedEnableDisable 12.21
54 TestCertOptions 36.46
55 TestCertExpiration 246.39
57 TestForceSystemdFlag 40.13
58 TestForceSystemdEnv 43.23
64 TestErrorSpam/setup 30.36
65 TestErrorSpam/start 0.76
66 TestErrorSpam/status 1
67 TestErrorSpam/pause 1.65
68 TestErrorSpam/unpause 1.81
69 TestErrorSpam/stop 1.41
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 79.3
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 31.01
76 TestFunctional/serial/KubeContext 0.06
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.74
81 TestFunctional/serial/CacheCmd/cache/add_local 1.88
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.06
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.14
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
89 TestFunctional/serial/ExtraConfig 33.06
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 1.6
92 TestFunctional/serial/LogsFileCmd 1.71
93 TestFunctional/serial/InvalidService 4.67
95 TestFunctional/parallel/ConfigCmd 0.6
96 TestFunctional/parallel/DashboardCmd 10.9
97 TestFunctional/parallel/DryRun 0.43
98 TestFunctional/parallel/InternationalLanguage 0.19
99 TestFunctional/parallel/StatusCmd 1.25
103 TestFunctional/parallel/ServiceCmdConnect 11.65
104 TestFunctional/parallel/AddonsCmd 0.16
105 TestFunctional/parallel/PersistentVolumeClaim 26.21
107 TestFunctional/parallel/SSHCmd 0.75
108 TestFunctional/parallel/CpCmd 2.11
110 TestFunctional/parallel/FileSync 0.34
111 TestFunctional/parallel/CertSync 2.15
115 TestFunctional/parallel/NodeLabels 0.09
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.77
119 TestFunctional/parallel/License 0.45
120 TestFunctional/parallel/Version/short 0.1
121 TestFunctional/parallel/Version/components 1.2
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
126 TestFunctional/parallel/ImageCommands/ImageBuild 2.82
127 TestFunctional/parallel/ImageCommands/Setup 2.25
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.89
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
133 TestFunctional/parallel/ProfileCmd/profile_list 0.48
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.58
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.14
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.31
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.89
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.28
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.22
152 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
153 TestFunctional/parallel/ServiceCmd/List 0.52
154 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
155 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
156 TestFunctional/parallel/ServiceCmd/Format 0.41
157 TestFunctional/parallel/ServiceCmd/URL 0.41
158 TestFunctional/parallel/MountCmd/any-port 8.65
159 TestFunctional/parallel/MountCmd/specific-port 2.45
160 TestFunctional/parallel/MountCmd/VerifyCleanup 2.56
161 TestFunctional/delete_addon-resizer_images 0.08
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.01
167 TestIngressAddonLegacy/StartLegacyK8sCluster 99.24
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.48
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.63
174 TestJSONOutput/start/Command 73.82
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 0.72
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 0.65
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 5.88
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.23
199 TestKicCustomNetwork/create_custom_network 42.9
200 TestKicCustomNetwork/use_default_bridge_network 36.94
201 TestKicExistingNetwork 31.91
202 TestKicCustomSubnet 33.25
203 TestKicStaticIP 36.4
204 TestMainNoArgs 0.06
205 TestMinikubeProfile 65.45
208 TestMountStart/serial/StartWithMountFirst 6.62
209 TestMountStart/serial/VerifyMountFirst 0.26
210 TestMountStart/serial/StartWithMountSecond 6.47
211 TestMountStart/serial/VerifyMountSecond 0.28
212 TestMountStart/serial/DeleteFirst 1.63
213 TestMountStart/serial/VerifyMountPostDelete 0.26
214 TestMountStart/serial/Stop 1.21
215 TestMountStart/serial/RestartStopped 7.58
216 TestMountStart/serial/VerifyMountPostStop 0.27
219 TestMultiNode/serial/FreshStart2Nodes 122.36
220 TestMultiNode/serial/DeployApp2Nodes 5.84
221 TestMultiNode/serial/PingHostFrom2Pods 1.02
222 TestMultiNode/serial/AddNode 50.01
223 TestMultiNode/serial/MultiNodeLabels 0.09
224 TestMultiNode/serial/ProfileList 0.32
225 TestMultiNode/serial/CopyFile 10.42
226 TestMultiNode/serial/StopNode 2.25
227 TestMultiNode/serial/StartAfterStop 12.69
228 TestMultiNode/serial/RestartKeepsNodes 119.83
229 TestMultiNode/serial/DeleteNode 5.02
230 TestMultiNode/serial/StopMultiNode 23.8
231 TestMultiNode/serial/RestartMultiNode 78.17
232 TestMultiNode/serial/ValidateNameConflict 31.49
237 TestPreload 189.68
239 TestScheduledStopUnix 106.79
242 TestInsufficientStorage 10.61
243 TestRunningBinaryUpgrade 81.35
245 TestKubernetesUpgrade 378.96
246 TestMissingContainerUpgrade 161.25
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
249 TestNoKubernetes/serial/StartWithK8s 39.1
250 TestNoKubernetes/serial/StartWithStopK8s 8.26
251 TestNoKubernetes/serial/Start 10.48
252 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
253 TestNoKubernetes/serial/ProfileList 1.09
254 TestNoKubernetes/serial/Stop 1.26
255 TestNoKubernetes/serial/StartNoArgs 7.27
256 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
257 TestStoppedBinaryUpgrade/Setup 1.84
258 TestStoppedBinaryUpgrade/Upgrade 65.88
259 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
268 TestPause/serial/Start 80.88
277 TestNetworkPlugins/group/false 4.98
282 TestStartStop/group/old-k8s-version/serial/FirstStart 119.51
283 TestStartStop/group/old-k8s-version/serial/DeployApp 26.48
284 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.02
285 TestStartStop/group/old-k8s-version/serial/Stop 12.03
286 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
287 TestStartStop/group/old-k8s-version/serial/SecondStart 443.86
289 TestStartStop/group/no-preload/serial/FirstStart 76.4
290 TestStartStop/group/no-preload/serial/DeployApp 9.35
291 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
292 TestStartStop/group/no-preload/serial/Stop 11.98
293 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
294 TestStartStop/group/no-preload/serial/SecondStart 627.26
295 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
296 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.13
297 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.32
298 TestStartStop/group/old-k8s-version/serial/Pause 4.15
300 TestStartStop/group/embed-certs/serial/FirstStart 85.63
301 TestStartStop/group/embed-certs/serial/DeployApp 9.41
302 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.33
303 TestStartStop/group/embed-certs/serial/Stop 12
304 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
305 TestStartStop/group/embed-certs/serial/SecondStart 358.34
306 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
307 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
308 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
309 TestStartStop/group/no-preload/serial/Pause 3.08
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 77.62
312 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
314 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.94
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 605.68
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 7.01
318 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
319 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
320 TestStartStop/group/embed-certs/serial/Pause 3.14
322 TestStartStop/group/newest-cni/serial/FirstStart 46.19
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
325 TestStartStop/group/newest-cni/serial/Stop 1.24
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
327 TestStartStop/group/newest-cni/serial/SecondStart 30
328 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
331 TestStartStop/group/newest-cni/serial/Pause 2.92
332 TestNetworkPlugins/group/auto/Start 75.19
333 TestNetworkPlugins/group/auto/KubeletFlags 0.29
334 TestNetworkPlugins/group/auto/NetCatPod 10.28
335 TestNetworkPlugins/group/auto/DNS 0.21
336 TestNetworkPlugins/group/auto/Localhost 0.16
337 TestNetworkPlugins/group/auto/HairPin 0.18
338 TestNetworkPlugins/group/kindnet/Start 77.19
339 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
340 TestNetworkPlugins/group/kindnet/KubeletFlags 0.49
341 TestNetworkPlugins/group/kindnet/NetCatPod 12.36
342 TestNetworkPlugins/group/kindnet/DNS 0.21
343 TestNetworkPlugins/group/kindnet/Localhost 0.34
344 TestNetworkPlugins/group/kindnet/HairPin 0.25
345 TestNetworkPlugins/group/calico/Start 70.17
346 TestNetworkPlugins/group/calico/ControllerPod 6.01
347 TestNetworkPlugins/group/calico/KubeletFlags 0.32
348 TestNetworkPlugins/group/calico/NetCatPod 10.26
349 TestNetworkPlugins/group/calico/DNS 0.21
350 TestNetworkPlugins/group/calico/Localhost 0.16
351 TestNetworkPlugins/group/calico/HairPin 0.16
352 TestNetworkPlugins/group/custom-flannel/Start 64.54
353 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
354 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
355 TestNetworkPlugins/group/custom-flannel/DNS 0.18
356 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
357 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
358 TestNetworkPlugins/group/enable-default-cni/Start 89.47
359 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.13
361 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
362 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.98
363 TestNetworkPlugins/group/flannel/Start 68.37
364 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
365 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
366 TestNetworkPlugins/group/flannel/ControllerPod 6.01
367 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
368 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
369 TestNetworkPlugins/group/flannel/NetCatPod 9.36
370 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
371 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
372 TestNetworkPlugins/group/flannel/DNS 0.27
373 TestNetworkPlugins/group/flannel/Localhost 0.22
374 TestNetworkPlugins/group/flannel/HairPin 0.27
375 TestNetworkPlugins/group/bridge/Start 86.24
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
377 TestNetworkPlugins/group/bridge/NetCatPod 9.24
378 TestNetworkPlugins/group/bridge/DNS 0.21
379 TestNetworkPlugins/group/bridge/Localhost 0.17
380 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.16.0/json-events (21.85s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-842602 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-842602 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (21.853697244s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (21.85s)

x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-842602
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-842602: exit status 85 (82.66428ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-842602 | jenkins | v1.32.0 | 14 Feb 24 00:17 UTC |          |
	|         | -p download-only-842602        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 00:17:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 00:17:52.829293  504067 out.go:291] Setting OutFile to fd 1 ...
	I0214 00:17:52.829499  504067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:17:52.829520  504067 out.go:304] Setting ErrFile to fd 2...
	I0214 00:17:52.829529  504067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:17:52.829862  504067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
	W0214 00:17:52.830033  504067 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18169-498689/.minikube/config/config.json: open /home/jenkins/minikube-integration/18169-498689/.minikube/config/config.json: no such file or directory
	I0214 00:17:52.830508  504067 out.go:298] Setting JSON to true
	I0214 00:17:52.831473  504067 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10816,"bootTime":1707859057,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0214 00:17:52.831543  504067 start.go:138] virtualization:  
	I0214 00:17:52.836989  504067 out.go:97] [download-only-842602] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 00:17:52.839471  504067 out.go:169] MINIKUBE_LOCATION=18169
	W0214 00:17:52.837239  504067 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball: no such file or directory
	I0214 00:17:52.837285  504067 notify.go:220] Checking for updates...
	I0214 00:17:52.841735  504067 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 00:17:52.844099  504067 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 00:17:52.845919  504067 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	I0214 00:17:52.848142  504067 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0214 00:17:52.852977  504067 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0214 00:17:52.853239  504067 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 00:17:52.874104  504067 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 00:17:52.874196  504067 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 00:17:52.961618  504067 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-14 00:17:52.952654905 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 00:17:52.961714  504067 docker.go:295] overlay module found
	I0214 00:17:52.964435  504067 out.go:97] Using the docker driver based on user configuration
	I0214 00:17:52.964472  504067 start.go:298] selected driver: docker
	I0214 00:17:52.964479  504067 start.go:902] validating driver "docker" against <nil>
	I0214 00:17:52.964604  504067 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 00:17:53.027406  504067 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-14 00:17:53.018040931 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 00:17:53.027566  504067 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 00:17:53.027872  504067 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0214 00:17:53.028055  504067 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 00:17:53.030617  504067 out.go:169] Using Docker driver with root privileges
	I0214 00:17:53.032742  504067 cni.go:84] Creating CNI manager for ""
	I0214 00:17:53.032769  504067 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 00:17:53.032782  504067 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 00:17:53.032797  504067 start_flags.go:321] config:
	{Name:download-only-842602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-842602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 00:17:53.034615  504067 out.go:97] Starting control plane node download-only-842602 in cluster download-only-842602
	I0214 00:17:53.034647  504067 cache.go:121] Beginning downloading kic base image for docker with crio
	I0214 00:17:53.036578  504067 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0214 00:17:53.036603  504067 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0214 00:17:53.036792  504067 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 00:17:53.051381  504067 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 00:17:53.052012  504067 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0214 00:17:53.052118  504067 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 00:17:53.108293  504067 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0214 00:17:53.108324  504067 cache.go:56] Caching tarball of preloaded images
	I0214 00:17:53.108929  504067 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0214 00:17:53.111163  504067 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0214 00:17:53.111192  504067 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0214 00:17:53.229462  504067 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0214 00:17:59.620231  504067 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0214 00:18:10.051328  504067 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0214 00:18:10.051459  504067 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0214 00:18:11.062491  504067 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0214 00:18:11.062863  504067 profile.go:148] Saving config to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/download-only-842602/config.json ...
	I0214 00:18:11.062899  504067 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/download-only-842602/config.json: {Name:mka399efb5c37846fea5a49ac6b56ec72b26c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:18:11.064396  504067 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0214 00:18:11.065030  504067 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/18169-498689/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-842602"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.21s)

x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-842602
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.16s)

x
+
TestDownloadOnly/v1.28.4/json-events (18s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-857203 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-857203 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (18.002603333s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (18.00s)

x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-857203
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-857203: exit status 85 (90.3089ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-842602 | jenkins | v1.32.0 | 14 Feb 24 00:17 UTC |                     |
	|         | -p download-only-842602        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC | 14 Feb 24 00:18 UTC |
	| delete  | -p download-only-842602        | download-only-842602 | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC | 14 Feb 24 00:18 UTC |
	| start   | -o=json --download-only        | download-only-857203 | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC |                     |
	|         | -p download-only-857203        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 00:18:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 00:18:15.148622  504227 out.go:291] Setting OutFile to fd 1 ...
	I0214 00:18:15.148808  504227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:18:15.148819  504227 out.go:304] Setting ErrFile to fd 2...
	I0214 00:18:15.148826  504227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:18:15.149091  504227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
	I0214 00:18:15.149583  504227 out.go:298] Setting JSON to true
	I0214 00:18:15.150543  504227 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10839,"bootTime":1707859057,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0214 00:18:15.150625  504227 start.go:138] virtualization:  
	I0214 00:18:15.153397  504227 out.go:97] [download-only-857203] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 00:18:15.153739  504227 notify.go:220] Checking for updates...
	I0214 00:18:15.156630  504227 out.go:169] MINIKUBE_LOCATION=18169
	I0214 00:18:15.159287  504227 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 00:18:15.161286  504227 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 00:18:15.163070  504227 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	I0214 00:18:15.165040  504227 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0214 00:18:15.169388  504227 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0214 00:18:15.169700  504227 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 00:18:15.192510  504227 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 00:18:15.192611  504227 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 00:18:15.275268  504227 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 00:18:15.265458274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 00:18:15.275389  504227 docker.go:295] overlay module found
	I0214 00:18:15.277496  504227 out.go:97] Using the docker driver based on user configuration
	I0214 00:18:15.277530  504227 start.go:298] selected driver: docker
	I0214 00:18:15.277537  504227 start.go:902] validating driver "docker" against <nil>
	I0214 00:18:15.277648  504227 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 00:18:15.335176  504227 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 00:18:15.326016894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 00:18:15.335364  504227 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 00:18:15.335707  504227 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0214 00:18:15.335904  504227 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 00:18:15.338057  504227 out.go:169] Using Docker driver with root privileges
	I0214 00:18:15.339995  504227 cni.go:84] Creating CNI manager for ""
	I0214 00:18:15.340017  504227 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 00:18:15.340029  504227 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 00:18:15.340042  504227 start_flags.go:321] config:
	{Name:download-only-857203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-857203 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 00:18:15.342495  504227 out.go:97] Starting control plane node download-only-857203 in cluster download-only-857203
	I0214 00:18:15.342527  504227 cache.go:121] Beginning downloading kic base image for docker with crio
	I0214 00:18:15.344398  504227 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0214 00:18:15.344423  504227 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0214 00:18:15.344600  504227 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 00:18:15.361172  504227 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 00:18:15.361292  504227 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0214 00:18:15.361316  504227 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0214 00:18:15.361323  504227 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0214 00:18:15.361336  504227 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0214 00:18:15.423370  504227 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0214 00:18:15.423396  504227 cache.go:56] Caching tarball of preloaded images
	I0214 00:18:15.424021  504227 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0214 00:18:15.426329  504227 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0214 00:18:15.426353  504227 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0214 00:18:15.550463  504227 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0214 00:18:28.634387  504227 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0214 00:18:28.634493  504227 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0214 00:18:29.553009  504227 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0214 00:18:29.553368  504227 profile.go:148] Saving config to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/download-only-857203/config.json ...
	I0214 00:18:29.553402  504227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/download-only-857203/config.json: {Name:mk90d80da37ca01b9520222c70773058ce75a189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:18:29.554076  504227 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0214 00:18:29.554610  504227 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18169-498689/.minikube/cache/linux/arm64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-857203"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-857203
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (16.77s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-594877 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-594877 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (16.774376884s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (16.77s)

x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-594877
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-594877: exit status 85 (88.603965ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-842602 | jenkins | v1.32.0 | 14 Feb 24 00:17 UTC |                     |
	|         | -p download-only-842602           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC | 14 Feb 24 00:18 UTC |
	| delete  | -p download-only-842602           | download-only-842602 | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC | 14 Feb 24 00:18 UTC |
	| start   | -o=json --download-only           | download-only-857203 | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC |                     |
	|         | -p download-only-857203           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC | 14 Feb 24 00:18 UTC |
	| delete  | -p download-only-857203           | download-only-857203 | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC | 14 Feb 24 00:18 UTC |
	| start   | -o=json --download-only           | download-only-594877 | jenkins | v1.32.0 | 14 Feb 24 00:18 UTC |                     |
	|         | -p download-only-594877           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 00:18:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 00:18:33.582284  504391 out.go:291] Setting OutFile to fd 1 ...
	I0214 00:18:33.582470  504391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:18:33.582482  504391 out.go:304] Setting ErrFile to fd 2...
	I0214 00:18:33.582488  504391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:18:33.582768  504391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
	I0214 00:18:33.583196  504391 out.go:298] Setting JSON to true
	I0214 00:18:33.584096  504391 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10857,"bootTime":1707859057,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0214 00:18:33.584171  504391 start.go:138] virtualization:  
	I0214 00:18:33.587089  504391 out.go:97] [download-only-594877] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 00:18:33.589411  504391 out.go:169] MINIKUBE_LOCATION=18169
	I0214 00:18:33.587359  504391 notify.go:220] Checking for updates...
	I0214 00:18:33.593544  504391 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 00:18:33.595626  504391 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 00:18:33.597406  504391 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	I0214 00:18:33.599550  504391 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0214 00:18:33.603285  504391 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0214 00:18:33.603584  504391 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 00:18:33.623154  504391 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 00:18:33.623256  504391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 00:18:33.688282  504391 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 00:18:33.679475691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 00:18:33.688378  504391 docker.go:295] overlay module found
	I0214 00:18:33.690363  504391 out.go:97] Using the docker driver based on user configuration
	I0214 00:18:33.690392  504391 start.go:298] selected driver: docker
	I0214 00:18:33.690400  504391 start.go:902] validating driver "docker" against <nil>
	I0214 00:18:33.690504  504391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 00:18:33.742025  504391 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 00:18:33.733602265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 00:18:33.742182  504391 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 00:18:33.742466  504391 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0214 00:18:33.742624  504391 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 00:18:33.744935  504391 out.go:169] Using Docker driver with root privileges
	I0214 00:18:33.747153  504391 cni.go:84] Creating CNI manager for ""
	I0214 00:18:33.747173  504391 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 00:18:33.747183  504391 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 00:18:33.747204  504391 start_flags.go:321] config:
	{Name:download-only-594877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-594877 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 00:18:33.749787  504391 out.go:97] Starting control plane node download-only-594877 in cluster download-only-594877
	I0214 00:18:33.749805  504391 cache.go:121] Beginning downloading kic base image for docker with crio
	I0214 00:18:33.752165  504391 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0214 00:18:33.752191  504391 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0214 00:18:33.752357  504391 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 00:18:33.766501  504391 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 00:18:33.766644  504391 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0214 00:18:33.766671  504391 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0214 00:18:33.766679  504391 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0214 00:18:33.766689  504391 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0214 00:18:33.817027  504391 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0214 00:18:33.817058  504391 cache.go:56] Caching tarball of preloaded images
	I0214 00:18:33.817211  504391 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0214 00:18:33.819146  504391 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0214 00:18:33.819172  504391 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0214 00:18:33.954895  504391 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:9d8119c6fd5c58f71de57a6fdbe27bf3 -> /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0214 00:18:45.815532  504391 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0214 00:18:45.815639  504391 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18169-498689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0214 00:18:46.691744  504391 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0214 00:18:46.692121  504391 profile.go:148] Saving config to /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/download-only-594877/config.json ...
	I0214 00:18:46.692157  504391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/download-only-594877/config.json: {Name:mk0320df81c8ed14ed534cdf4805a585bf528122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 00:18:46.692808  504391 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0214 00:18:46.692974  504391 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18169-498689/.minikube/cache/linux/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-594877"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)
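
The LogsDuration output above shows minikube fetching the preload tarball with an md5 checksum query parameter and then saving and verifying that checksum on disk. As a minimal sketch of the same idea (not minikube's own implementation; the local file name and expected digest are simply the values visible in the download line), checksum verification of a downloaded artifact can be done like this in Go:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	// verifyMD5 streams the file through an md5 hash and compares the hex
	// digest with the expected value carried in the download URL.
	func verifyMD5(path, expected string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != expected {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
		}
		return nil
	}

	func main() {
		// Values taken from the download line above; adjust for other preloads.
		tarball := "preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4"
		if err := verifyMD5(tarball, "9d8119c6fd5c58f71de57a6fdbe27bf3"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("preload checksum OK")
	}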

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-594877
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-919425 --alsologtostderr --binary-mirror http://127.0.0.1:44395 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-919425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-919425
--- PASS: TestBinaryMirror (0.60s)
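
TestBinaryMirror points "minikube start --binary-mirror http://127.0.0.1:44395" at a local HTTP endpoint instead of the public release servers. A hedged sketch of such a mirror using Go's standard file server follows; the ./mirror directory layout is an assumption, not the test's actual fixture:

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a directory of pre-downloaded release binaries, for example
		// ./mirror/release/v1.29.0-rc.2/bin/linux/arm64/kubectl, so that
		// "minikube start --binary-mirror http://127.0.0.1:44395" can fetch
		// kubectl and friends without leaving the machine.
		fs := http.FileServer(http.Dir("./mirror"))
		log.Println("binary mirror listening on 127.0.0.1:44395")
		log.Fatal(http.ListenAndServe("127.0.0.1:44395", fs))
	}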

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-956081
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-956081: exit status 85 (82.082168ms)

                                                
                                                
-- stdout --
	* Profile "addons-956081" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-956081"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-956081
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-956081: exit status 85 (79.28429ms)

                                                
                                                
-- stdout --
	* Profile "addons-956081" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-956081"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (157.19s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-956081 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-956081 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m37.189489751s)
--- PASS: TestAddons/Setup (157.19s)
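
The setup step enables a dozen addons in a single start invocation. A hedged sketch of driving the same invocation from Go with an overall deadline follows; the profile name and addon list come from the command above, while the 10-minute timeout is just a placeholder:

	package main

	import (
		"context"
		"log"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		addons := []string{
			"registry", "metrics-server", "volumesnapshots", "csi-hostpath-driver",
			"gcp-auth", "cloud-spanner", "inspektor-gadget", "storage-provisioner-rancher",
			"nvidia-device-plugin", "yakd", "ingress", "ingress-dns",
		}
		args := []string{"start", "-p", "addons-956081", "--wait=true", "--memory=4000",
			"--driver=docker", "--container-runtime=crio"}
		for _, a := range addons {
			args = append(args, "--addons="+a)
		}

		// Bound the whole start, including addon deployment, with a context deadline.
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
		defer cancel()
		cmd := exec.CommandContext(ctx, "minikube", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("start with addons failed: %v", err)
		}
	}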

                                                
                                    
x
+
TestAddons/parallel/Registry (16.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 52.186411ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-d2bdl" [fda90818-b101-4cff-a2bb-f49e44f3b67a] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.016495187s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fvbb5" [156c0724-e6f5-4c89-8612-337af2fb9919] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004682029s
addons_test.go:340: (dbg) Run:  kubectl --context addons-956081 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-956081 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-956081 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.530341746s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-956081 ip
2024/02/14 00:21:45 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-956081 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.64s)
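
The registry check above boils down to two reachability probes: an in-cluster wget against the registry Service and a plain HTTP GET against the node IP on port 5000 (the DEBUG line). A minimal sketch of that second probe, assuming the node IP and port shown in the log:

	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		// The node IP comes from "minikube ip"; port 5000 is exposed by the registry addon.
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://192.168.49.2:5000/")
		if err != nil {
			log.Fatalf("registry not reachable: %v", err)
		}
		defer resp.Body.Close()
		fmt.Println("registry responded with", resp.Status)
	}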

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.91s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-wfxll" [7b938b15-6fc7-49ac-ae0b-17ec4827cbd8] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011686168s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-956081
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-956081: (5.895296281s)
--- PASS: TestAddons/parallel/InspektorGadget (10.91s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.8s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 6.811306ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-2xwxl" [ea037097-4f57-49dc-a932-6fe9c01e7e65] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004772378s
addons_test.go:415: (dbg) Run:  kubectl --context addons-956081 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-956081 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.80s)
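
Like most of these parallel addon tests, MetricsServer spends its time waiting for pods matching a label selector to become healthy before exercising the addon. A minimal sketch of that wait, shelling out to kubectl the way the surrounding steps do; the context, selector, namespace and timeout are copied from the log, and the use of "kubectl wait" is an assumption rather than the helper's actual mechanism:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Block until every pod matching the selector reports Ready, or fail
		// once the timeout elapses.
		cmd := exec.Command("kubectl", "--context", "addons-956081",
			"wait", "--for=condition=ready", "pod",
			"-l", "k8s-app=metrics-server",
			"-n", "kube-system", "--timeout=6m0s")
		out, err := cmd.CombinedOutput()
		log.Printf("%s", out)
		if err != nil {
			log.Fatalf("pods did not become ready: %v", err)
		}
	}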

                                                
                                    
x
+
TestAddons/parallel/CSI (54.12s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 9.218968ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-956081 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-956081 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6c872240-ae3c-40cb-8cd6-eb241ef5483c] Pending
helpers_test.go:344: "task-pv-pod" [6c872240-ae3c-40cb-8cd6-eb241ef5483c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6c872240-ae3c-40cb-8cd6-eb241ef5483c] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003554274s
addons_test.go:584: (dbg) Run:  kubectl --context addons-956081 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-956081 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-956081 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-956081 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-956081 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-956081 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-956081 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f2421ec2-0363-4952-9373-dda829134857] Pending
helpers_test.go:344: "task-pv-pod-restore" [f2421ec2-0363-4952-9373-dda829134857] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f2421ec2-0363-4952-9373-dda829134857] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004035736s
addons_test.go:626: (dbg) Run:  kubectl --context addons-956081 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-956081 delete pod task-pv-pod-restore: (1.186774821s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-956081 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-956081 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-956081 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-956081 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.908370216s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-956081 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.12s)
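
The CSI test repeatedly runs "kubectl get pvc ... -o jsonpath={.status.phase}" until the claim is Bound before moving on to the snapshot and restore steps. The same poll can be sketched as follows; the two-second interval and six-minute deadline are assumptions matching the stated wait:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		// Poll the PVC phase until it reaches Bound or the deadline passes.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "addons-956081",
				"get", "pvc", "hpvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				fmt.Println("pvc is Bound")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("pvc never reached Bound")
	}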

                                                
                                    
x
+
TestAddons/parallel/Headlamp (13.58s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-956081 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-956081 --alsologtostderr -v=1: (1.570319666s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-dlnlc" [97ffdcb6-fe07-41d4-9aa7-33d6f271ba72] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-dlnlc" [97ffdcb6-fe07-41d4-9aa7-33d6f271ba72] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-dlnlc" [97ffdcb6-fe07-41d4-9aa7-33d6f271ba72] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.006257256s
--- PASS: TestAddons/parallel/Headlamp (13.58s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.68s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-9qhf7" [7a0a7380-cf5a-499e-a6c4-8df0dbd21c49] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003717177s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-956081
--- PASS: TestAddons/parallel/CloudSpanner (5.68s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (51.62s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-956081 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-956081 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956081 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8b0deda7-d3dc-4e90-8537-27f2f8dc7f50] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8b0deda7-d3dc-4e90-8537-27f2f8dc7f50] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8b0deda7-d3dc-4e90-8537-27f2f8dc7f50] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00408196s
addons_test.go:891: (dbg) Run:  kubectl --context addons-956081 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-956081 ssh "cat /opt/local-path-provisioner/pvc-212cb541-02a7-4781-88d4-17a5a71edc4b_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-956081 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-956081 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-956081 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-956081 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.487965229s)
--- PASS: TestAddons/parallel/LocalPath (51.62s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.57s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kw86f" [697d1b46-8c72-42e2-9711-70685bbcb1b3] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004494388s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-956081
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.57s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-fshgv" [3d1e397d-c24f-4f2b-8fca-95a3ead6d442] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005311785s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-956081 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-956081 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.21s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-956081
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-956081: (11.912621098s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-956081
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-956081
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-956081
--- PASS: TestAddons/StoppedEnableDisable (12.21s)

                                                
                                    
x
+
TestCertOptions (36.46s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-520200 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-520200 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.749705978s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-520200 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-520200 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-520200 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-520200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-520200
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-520200: (1.982489011s)
--- PASS: TestCertOptions (36.46s)
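
TestCertOptions confirms the extra apiserver IPs, names and port made it into the generated certificate by dumping it with openssl inside the node, and TestCertExpiration below exercises the --cert-expiration window. A local equivalent of those checks, parsing a copy of the certificate with the standard library; the file is assumed to have been copied out first, e.g. with "minikube ssh 'sudo cat /var/lib/minikube/certs/apiserver.crt'":

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// SANs reflect --apiserver-ips / --apiserver-names; NotAfter reflects --cert-expiration.
		fmt.Println("DNS names:", cert.DNSNames)
		fmt.Println("IPs:      ", cert.IPAddresses)
		fmt.Println("Expires:  ", cert.NotAfter)
	}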

                                                
                                    
x
+
TestCertExpiration (246.39s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-026427 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-026427 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.464500465s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-026427 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-026427 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (24.440949122s)
helpers_test.go:175: Cleaning up "cert-expiration-026427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-026427
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-026427: (2.48233242s)
--- PASS: TestCertExpiration (246.39s)

                                                
                                    
x
+
TestForceSystemdFlag (40.13s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-117007 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-117007 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.242657572s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-117007 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-117007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-117007
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-117007: (2.502687772s)
--- PASS: TestForceSystemdFlag (40.13s)
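
TestForceSystemdFlag verifies the flag took effect by reading /etc/crio/crio.conf.d/02-crio.conf from the node. A rough equivalent of that assertion, scanning a locally copied drop-in for the systemd cgroup manager setting; the expected "cgroup_manager" line is an assumption about the file's contents:

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("02-crio.conf")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// Look for a line like: cgroup_manager = "systemd"
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			line := strings.TrimSpace(scanner.Text())
			if strings.HasPrefix(line, "cgroup_manager") && strings.Contains(line, "systemd") {
				fmt.Println("CRI-O uses the systemd cgroup manager:", line)
				return
			}
		}
		if err := scanner.Err(); err != nil {
			log.Fatal(err)
		}
		log.Fatal("systemd cgroup manager setting not found")
	}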

                                                
                                    
x
+
TestForceSystemdEnv (43.23s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-216444 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0214 01:02:33.882266  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-216444 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.628621065s)
helpers_test.go:175: Cleaning up "force-systemd-env-216444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-216444
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-216444: (2.601457847s)
--- PASS: TestForceSystemdEnv (43.23s)

                                                
                                    
x
+
TestErrorSpam/setup (30.36s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-528249 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-528249 --driver=docker  --container-runtime=crio
E0214 00:26:29.723364  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 00:26:29.730974  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 00:26:29.741243  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 00:26:29.761659  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 00:26:29.801912  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 00:26:29.882390  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 00:26:30.042912  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 00:26:30.363179  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 00:26:31.004471  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 00:26:32.284882  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 00:26:34.845660  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 00:26:39.966345  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-528249 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-528249 --driver=docker  --container-runtime=crio: (30.364758074s)
--- PASS: TestErrorSpam/setup (30.36s)

                                                
                                    
x
+
TestErrorSpam/start (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

                                                
                                    
x
+
TestErrorSpam/status (1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 status
--- PASS: TestErrorSpam/status (1.00s)

                                                
                                    
x
+
TestErrorSpam/pause (1.65s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 pause
E0214 00:26:50.206733  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
--- PASS: TestErrorSpam/pause (1.65s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
x
+
TestErrorSpam/stop (1.41s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 stop: (1.21489569s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-528249 --log_dir /tmp/nospam-528249 stop
--- PASS: TestErrorSpam/stop (1.41s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18169-498689/.minikube/files/etc/test/nested/copy/504061/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79.3s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-526497 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0214 00:27:10.686965  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 00:27:51.647194  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-526497 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.295075447s)
--- PASS: TestFunctional/serial/StartWithProxy (79.30s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (31.01s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-526497 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-526497 --alsologtostderr -v=8: (31.011075017s)
functional_test.go:659: soft start took 31.01261213s for "functional-526497" cluster.
--- PASS: TestFunctional/serial/SoftStart (31.01s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-526497 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-526497 cache add registry.k8s.io/pause:3.1: (1.232521053s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-526497 cache add registry.k8s.io/pause:3.3: (1.274870106s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-526497 cache add registry.k8s.io/pause:latest: (1.234129479s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.74s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-526497 /tmp/TestFunctionalserialCacheCmdcacheadd_local1378717353/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 cache add minikube-local-cache-test:functional-526497
functional_test.go:1085: (dbg) Done: out/minikube-linux-arm64 -p functional-526497 cache add minikube-local-cache-test:functional-526497: (1.407950709s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 cache delete minikube-local-cache-test:functional-526497
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-526497
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.88s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-526497 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (305.997155ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-526497 cache reload: (1.089705535s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 kubectl -- --context functional-526497 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-526497 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.06s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-526497 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0214 00:29:13.567426  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-526497 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.055679046s)
functional_test.go:757: restart took 33.05578389s for "functional-526497" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.06s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-526497 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.6s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-526497 logs: (1.601963977s)
--- PASS: TestFunctional/serial/LogsCmd (1.60s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.71s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 logs --file /tmp/TestFunctionalserialLogsFileCmd1255469405/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-526497 logs --file /tmp/TestFunctionalserialLogsFileCmd1255469405/001/logs.txt: (1.713191064s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.71s)

                                                
                                    
TestFunctional/serial/InvalidService (4.67s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-526497 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-526497
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-526497: exit status 115 (641.634191ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31390 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-526497 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.67s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.6s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-526497 config get cpus: exit status 14 (88.660696ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-526497 config get cpus: exit status 14 (101.219395ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.60s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.9s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-526497 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-526497 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 529685: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.90s)

                                                
                                    
TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-526497 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-526497 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (182.96153ms)

                                                
                                                
-- stdout --
	* [functional-526497] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18169
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 00:30:21.398661  528885 out.go:291] Setting OutFile to fd 1 ...
	I0214 00:30:21.398847  528885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:30:21.398857  528885 out.go:304] Setting ErrFile to fd 2...
	I0214 00:30:21.398863  528885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:30:21.399144  528885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
	I0214 00:30:21.399543  528885 out.go:298] Setting JSON to false
	I0214 00:30:21.400629  528885 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11565,"bootTime":1707859057,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0214 00:30:21.400733  528885 start.go:138] virtualization:  
	I0214 00:30:21.403217  528885 out.go:177] * [functional-526497] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 00:30:21.405899  528885 notify.go:220] Checking for updates...
	I0214 00:30:21.407684  528885 out.go:177]   - MINIKUBE_LOCATION=18169
	I0214 00:30:21.409314  528885 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 00:30:21.410668  528885 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 00:30:21.412147  528885 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	I0214 00:30:21.413967  528885 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 00:30:21.415599  528885 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 00:30:21.418002  528885 config.go:182] Loaded profile config "functional-526497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0214 00:30:21.418523  528885 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 00:30:21.439762  528885 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 00:30:21.439860  528885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 00:30:21.511592  528885 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-14 00:30:21.502236294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 00:30:21.511715  528885 docker.go:295] overlay module found
	I0214 00:30:21.513661  528885 out.go:177] * Using the docker driver based on existing profile
	I0214 00:30:21.515632  528885 start.go:298] selected driver: docker
	I0214 00:30:21.515652  528885 start.go:902] validating driver "docker" against &{Name:functional-526497 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-526497 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 00:30:21.515773  528885 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 00:30:21.518156  528885 out.go:177] 
	W0214 00:30:21.519821  528885 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0214 00:30:21.521583  528885 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-526497 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-526497 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-526497 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (194.418567ms)

                                                
                                                
-- stdout --
	* [functional-526497] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18169
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 00:30:24.672474  529478 out.go:291] Setting OutFile to fd 1 ...
	I0214 00:30:24.672653  529478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:30:24.672660  529478 out.go:304] Setting ErrFile to fd 2...
	I0214 00:30:24.672667  529478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:30:24.673578  529478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
	I0214 00:30:24.674030  529478 out.go:298] Setting JSON to false
	I0214 00:30:24.674940  529478 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11568,"bootTime":1707859057,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0214 00:30:24.675012  529478 start.go:138] virtualization:  
	I0214 00:30:24.677478  529478 out.go:177] * [functional-526497] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0214 00:30:24.680011  529478 out.go:177]   - MINIKUBE_LOCATION=18169
	I0214 00:30:24.680076  529478 notify.go:220] Checking for updates...
	I0214 00:30:24.684347  529478 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 00:30:24.686358  529478 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 00:30:24.688168  529478 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	I0214 00:30:24.690158  529478 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 00:30:24.692463  529478 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 00:30:24.694968  529478 config.go:182] Loaded profile config "functional-526497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0214 00:30:24.695502  529478 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 00:30:24.716692  529478 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 00:30:24.716800  529478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 00:30:24.777690  529478 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-14 00:30:24.768534693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 00:30:24.777813  529478 docker.go:295] overlay module found
	I0214 00:30:24.779911  529478 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0214 00:30:24.781890  529478 start.go:298] selected driver: docker
	I0214 00:30:24.781912  529478 start.go:902] validating driver "docker" against &{Name:functional-526497 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-526497 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 00:30:24.782014  529478 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 00:30:24.784987  529478 out.go:177] 
	W0214 00:30:24.786928  529478 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0214 00:30:24.788799  529478 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-526497 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-526497 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-jlcnq" [275045fd-b804-4df6-b16e-b82a35cbac0c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-jlcnq" [275045fd-b804-4df6-b16e-b82a35cbac0c] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003866443s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32096
functional_test.go:1671: http://192.168.49.2:32096: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-jlcnq

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32096
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.65s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.21s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4a7b04ae-cd38-41c4-9076-0ee36d9c5fcc] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004552044s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-526497 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-526497 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-526497 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-526497 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [de3b20bf-cdee-47a4-81ef-81273c2a4e74] Pending
helpers_test.go:344: "sp-pod" [de3b20bf-cdee-47a4-81ef-81273c2a4e74] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [de3b20bf-cdee-47a4-81ef-81273c2a4e74] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004847872s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-526497 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-526497 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-526497 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [13cce75f-e880-44b4-af3b-7bf5d7ccebcd] Pending
helpers_test.go:344: "sp-pod" [13cce75f-e880-44b4-af3b-7bf5d7ccebcd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004906791s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-526497 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.21s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.75s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.11s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh -n functional-526497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 cp functional-526497:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3699908921/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh -n functional-526497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh -n functional-526497 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.11s)

                                                
                                    
TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/504061/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "sudo cat /etc/test/nested/copy/504061/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
TestFunctional/parallel/CertSync (2.15s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/504061.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "sudo cat /etc/ssl/certs/504061.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/504061.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "sudo cat /usr/share/ca-certificates/504061.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/5040612.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "sudo cat /etc/ssl/certs/5040612.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/5040612.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "sudo cat /usr/share/ca-certificates/5040612.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.15s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-526497 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-526497 ssh "sudo systemctl is-active docker": exit status 1 (402.452258ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-526497 ssh "sudo systemctl is-active containerd": exit status 1 (369.575122ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

                                                
                                    
TestFunctional/parallel/License (0.45s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.45s)

                                                
                                    
TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 version --short
2024/02/14 00:30:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (1.2s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-526497 version -o=json --components: (1.198012175s)
--- PASS: TestFunctional/parallel/Version/components (1.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-526497 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-526497
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-526497 image ls --format short --alsologtostderr:
I0214 00:30:36.347601  530853 out.go:291] Setting OutFile to fd 1 ...
I0214 00:30:36.347829  530853 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 00:30:36.347856  530853 out.go:304] Setting ErrFile to fd 2...
I0214 00:30:36.347878  530853 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 00:30:36.348161  530853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
I0214 00:30:36.348841  530853 config.go:182] Loaded profile config "functional-526497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0214 00:30:36.349056  530853 config.go:182] Loaded profile config "functional-526497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0214 00:30:36.349590  530853 cli_runner.go:164] Run: docker container inspect functional-526497 --format={{.State.Status}}
I0214 00:30:36.368267  530853 ssh_runner.go:195] Run: systemctl --version
I0214 00:30:36.368320  530853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-526497
I0214 00:30:36.387411  530853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33402 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/functional-526497/id_rsa Username:docker}
I0214 00:30:36.482366  530853 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-526497 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer  | functional-526497  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/library/nginx                 | latest             | 11deb55301007 | 196MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| docker.io/library/nginx                 | alpine             | d315ef79be32c | 45.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-526497 image ls --format table --alsologtostderr:
I0214 00:30:37.197598  531018 out.go:291] Setting OutFile to fd 1 ...
I0214 00:30:37.197907  531018 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 00:30:37.197936  531018 out.go:304] Setting ErrFile to fd 2...
I0214 00:30:37.197960  531018 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 00:30:37.205518  531018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
I0214 00:30:37.206817  531018 config.go:182] Loaded profile config "functional-526497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0214 00:30:37.207044  531018 config.go:182] Loaded profile config "functional-526497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0214 00:30:37.209008  531018 cli_runner.go:164] Run: docker container inspect functional-526497 --format={{.State.Status}}
I0214 00:30:37.239049  531018 ssh_runner.go:195] Run: systemctl --version
I0214 00:30:37.239100  531018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-526497
I0214 00:30:37.260414  531018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33402 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/functional-526497/id_rsa Username:docker}
I0214 00:30:37.358701  531018 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-526497 image ls --format json --alsologtostderr:
[{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"d315ef79be32cd8ae44f153a41c42e5e407c04f959074ddb8acc2c26649e2676","repoDigests":["docker.io/library/nginx@sha256:4fb7e44d1af9cdfbd38c4e951e84d528662fa083fd74f03f13cd797dc7c39bee","docker.io/library/nginx@sha256:f2802c2a9d09c7aa3ace27445dfc5656ff24355da28e7b958074a0111e3fc076"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45333355"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"11deb55301007d6bf1db2ce20cb5d12e447541969990af4a03e2af8141ebdbed","repoDigests":["docker.io/library/nginx@sha256:0e1330510a8e57568e7e908b27a50658ae84de9e9f907647cb4628fbc799f938","docker.io/librar
y/nginx@sha256:eb8bb0f063123263a8ad18d90a0268275a9cfa8dc514d83003be1ae74e2afa90"],"repoTags":["docker.io/library/nginx:latest"],"size":"196173506"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"20b
332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-526497"],"size":"34114467"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"},{"id
":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"59253556"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181
bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/p
ause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-526497 image ls --format json --alsologtostderr:
I0214 00:30:36.923521  530951 out.go:291] Setting OutFile to fd 1 ...
I0214 00:30:36.923735  530951 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 00:30:36.923750  530951 out.go:304] Setting ErrFile to fd 2...
I0214 00:30:36.923756  530951 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 00:30:36.924045  530951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
I0214 00:30:36.924736  530951 config.go:182] Loaded profile config "functional-526497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0214 00:30:36.924912  530951 config.go:182] Loaded profile config "functional-526497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0214 00:30:36.925445  530951 cli_runner.go:164] Run: docker container inspect functional-526497 --format={{.State.Status}}
I0214 00:30:36.947978  530951 ssh_runner.go:195] Run: systemctl --version
I0214 00:30:36.948044  530951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-526497
I0214 00:30:36.970210  530951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33402 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/functional-526497/id_rsa Username:docker}
I0214 00:30:37.063736  530951 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-526497 image ls --format yaml --alsologtostderr:
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 11deb55301007d6bf1db2ce20cb5d12e447541969990af4a03e2af8141ebdbed
repoDigests:
- docker.io/library/nginx@sha256:0e1330510a8e57568e7e908b27a50658ae84de9e9f907647cb4628fbc799f938
- docker.io/library/nginx@sha256:eb8bb0f063123263a8ad18d90a0268275a9cfa8dc514d83003be1ae74e2afa90
repoTags:
- docker.io/library/nginx:latest
size: "196173506"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-526497
size: "34114467"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: d315ef79be32cd8ae44f153a41c42e5e407c04f959074ddb8acc2c26649e2676
repoDigests:
- docker.io/library/nginx@sha256:4fb7e44d1af9cdfbd38c4e951e84d528662fa083fd74f03f13cd797dc7c39bee
- docker.io/library/nginx@sha256:f2802c2a9d09c7aa3ace27445dfc5656ff24355da28e7b958074a0111e3fc076
repoTags:
- docker.io/library/nginx:alpine
size: "45333355"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-526497 image ls --format yaml --alsologtostderr:
I0214 00:30:36.610729  530891 out.go:291] Setting OutFile to fd 1 ...
I0214 00:30:36.610986  530891 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 00:30:36.611014  530891 out.go:304] Setting ErrFile to fd 2...
I0214 00:30:36.611034  530891 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 00:30:36.611318  530891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
I0214 00:30:36.611988  530891 config.go:182] Loaded profile config "functional-526497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0214 00:30:36.612177  530891 config.go:182] Loaded profile config "functional-526497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0214 00:30:36.612747  530891 cli_runner.go:164] Run: docker container inspect functional-526497 --format={{.State.Status}}
I0214 00:30:36.636883  530891 ssh_runner.go:195] Run: systemctl --version
I0214 00:30:36.636935  530891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-526497
I0214 00:30:36.654249  530891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33402 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/functional-526497/id_rsa Username:docker}
I0214 00:30:36.755103  530891 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
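The four ImageList subtests above are the same listing command run with different output formats; a minimal sketch of the invocations exercised here (assuming the functional-526497 profile is running, flags as in the log minus --alsologtostderr):

  # List images present in the cluster's container runtime, in each supported format
  out/minikube-linux-arm64 -p functional-526497 image ls --format short
  out/minikube-linux-arm64 -p functional-526497 image ls --format table
  out/minikube-linux-arm64 -p functional-526497 image ls --format json
  out/minikube-linux-arm64 -p functional-526497 image ls --format yaml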

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-526497 ssh pgrep buildkitd: exit status 1 (393.233249ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image build -t localhost/my-image:functional-526497 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-526497 image build -t localhost/my-image:functional-526497 testdata/build --alsologtostderr: (2.199663177s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-526497 image build -t localhost/my-image:functional-526497 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 728f1c02e23
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-526497
--> 1eafdedefd5
Successfully tagged localhost/my-image:functional-526497
1eafdedefd5d5f9382da3e8302516ba4a638f9b85118cdc7440ed6b62a3f055d
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-526497 image build -t localhost/my-image:functional-526497 testdata/build --alsologtostderr:
I0214 00:30:37.236164  531019 out.go:291] Setting OutFile to fd 1 ...
I0214 00:30:37.236764  531019 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 00:30:37.236783  531019 out.go:304] Setting ErrFile to fd 2...
I0214 00:30:37.236793  531019 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 00:30:37.237102  531019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
I0214 00:30:37.238033  531019 config.go:182] Loaded profile config "functional-526497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0214 00:30:37.240295  531019 config.go:182] Loaded profile config "functional-526497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0214 00:30:37.241053  531019 cli_runner.go:164] Run: docker container inspect functional-526497 --format={{.State.Status}}
I0214 00:30:37.258099  531019 ssh_runner.go:195] Run: systemctl --version
I0214 00:30:37.258151  531019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-526497
I0214 00:30:37.275851  531019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33402 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/functional-526497/id_rsa Username:docker}
I0214 00:30:37.370897  531019 build_images.go:151] Building image from path: /tmp/build.1968196556.tar
I0214 00:30:37.370974  531019 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0214 00:30:37.380260  531019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1968196556.tar
I0214 00:30:37.387408  531019 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1968196556.tar: stat -c "%s %y" /var/lib/minikube/build/build.1968196556.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1968196556.tar': No such file or directory
I0214 00:30:37.387440  531019 ssh_runner.go:362] scp /tmp/build.1968196556.tar --> /var/lib/minikube/build/build.1968196556.tar (3072 bytes)
I0214 00:30:37.427411  531019 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1968196556
I0214 00:30:37.436406  531019 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1968196556 -xf /var/lib/minikube/build/build.1968196556.tar
I0214 00:30:37.445676  531019 crio.go:297] Building image: /var/lib/minikube/build/build.1968196556
I0214 00:30:37.445776  531019 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-526497 /var/lib/minikube/build/build.1968196556 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0214 00:30:39.300177  531019 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-526497 /var/lib/minikube/build/build.1968196556 --cgroup-manager=cgroupfs: (1.854374759s)
I0214 00:30:39.300255  531019 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1968196556
I0214 00:30:39.309169  531019 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1968196556.tar
I0214 00:30:39.317750  531019 build_images.go:207] Built localhost/my-image:functional-526497 from /tmp/build.1968196556.tar
I0214 00:30:39.317778  531019 build_images.go:123] succeeded building to: functional-526497
I0214 00:30:39.317783  531019 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.82s)
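The three STEP lines in the build stdout above correspond to a three-instruction build file under testdata/build; a minimal sketch of that flow (the file name and the contents of content.txt are assumptions, the instructions are taken from the STEP 1/3..3/3 lines above):

  # testdata/build/Dockerfile (name assumed; instructions match the logged build steps)
  #   FROM gcr.io/k8s-minikube/busybox
  #   RUN true
  #   ADD content.txt /

  # Build the image inside the cluster's runtime (on crio this is driven by podman, as logged above)
  out/minikube-linux-arm64 -p functional-526497 image build -t localhost/my-image:functional-526497 testdata/build
  out/minikube-linux-arm64 -p functional-526497 image ls   # confirm localhost/my-image:functional-526497 appears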

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.225781735s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-526497
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.25s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image load --daemon gcr.io/google-containers/addon-resizer:functional-526497 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-526497 image load --daemon gcr.io/google-containers/addon-resizer:functional-526497 --alsologtostderr: (4.664664069s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.89s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "410.058902ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "71.631409ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "391.65473ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "74.524402ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)
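The ProfileCmd subtests time plain profile listings; a minimal sketch of the variants exercised above (flags exactly as in the log; the --light/-l forms skip the slower per-cluster status checks):

  out/minikube-linux-arm64 profile list                 # human-readable table
  out/minikube-linux-arm64 profile list -l              # lighter, faster listing
  out/minikube-linux-arm64 profile list -o json
  out/minikube-linux-arm64 profile list -o json --light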

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-526497 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-526497 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-526497 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 527094: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-526497 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-526497 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-526497 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5888afc4-75e2-45f2-88b9-7b07362f4f5e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5888afc4-75e2-45f2-88b9-7b07362f4f5e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004892927s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.58s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image load --daemon gcr.io/google-containers/addon-resizer:functional-526497 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-526497 image load --daemon gcr.io/google-containers/addon-resizer:functional-526497 --alsologtostderr: (2.871486614s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.14s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.278405437s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-526497
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image load --daemon gcr.io/google-containers/addon-resizer:functional-526497 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-526497 image load --daemon gcr.io/google-containers/addon-resizer:functional-526497 --alsologtostderr: (3.70913609s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.31s)
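Taken together, Setup, ImageLoadDaemon, ImageReloadDaemon and ImageTagAndLoadDaemon exercise copying a locally tagged Docker image into the cluster's runtime; a minimal sketch using the same commands as the log:

  # Pull an image into the local Docker daemon and retag it for this profile
  docker pull gcr.io/google-containers/addon-resizer:1.8.9
  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-526497
  # Copy it from the local daemon into the cluster's CRI-O image store
  out/minikube-linux-arm64 -p functional-526497 image load --daemon gcr.io/google-containers/addon-resizer:functional-526497
  out/minikube-linux-arm64 -p functional-526497 image ls   # confirm the image is visible in-cluster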

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-526497 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.0.125 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-526497 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 527706: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)
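The TunnelCmd serial subtests follow one workflow: start a tunnel, create a LoadBalancer service, wait for it to receive an ingress IP, then reach it from the host. A minimal sketch of that flow (the final curl is an assumption; the log only reports the tunnel endpoint as working):

  out/minikube-linux-arm64 -p functional-526497 tunnel &    # keep the tunnel running in the background
  kubectl --context functional-526497 apply -f testdata/testsvc.yaml   # creates the nginx-svc LoadBalancer service
  kubectl --context functional-526497 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
  curl http://10.104.0.125/        # IP taken from the "tunnel ... is working" line above
  kill %1                          # stop the tunnel when finished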

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image save gcr.io/google-containers/addon-resizer:functional-526497 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image rm gcr.io/google-containers/addon-resizer:functional-526497 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-526497 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.048995094s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-526497
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 image save --daemon gcr.io/google-containers/addon-resizer:functional-526497 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-526497 image save --daemon gcr.io/google-containers/addon-resizer:functional-526497 --alsologtostderr: (1.181223718s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-526497
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)
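ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above form a save/remove/restore roundtrip; a minimal sketch with the same commands (the tarball path is shortened here):

  # Export the image from the cluster to a tarball on the host
  out/minikube-linux-arm64 -p functional-526497 image save gcr.io/google-containers/addon-resizer:functional-526497 ./addon-resizer-save.tar
  # Remove it from the cluster, then restore it from the tarball
  out/minikube-linux-arm64 -p functional-526497 image rm gcr.io/google-containers/addon-resizer:functional-526497
  out/minikube-linux-arm64 -p functional-526497 image load ./addon-resizer-save.tar
  # Or push it back into the local Docker daemon instead
  out/minikube-linux-arm64 -p functional-526497 image save --daemon gcr.io/google-containers/addon-resizer:functional-526497
  docker image inspect gcr.io/google-containers/addon-resizer:functional-526497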

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-526497 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-526497 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-qvd8q" [b924ced2-87f0-4b81-96b4-65614dc49712] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-qvd8q" [b924ced2-87f0-4b81-96b4-65614dc49712] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004225923s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 service list -o json
functional_test.go:1490: Took "525.732235ms" to run "out/minikube-linux-arm64 -p functional-526497 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30839
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30839
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)
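The ServiceCmd subtests exercise the standard deploy-and-expose flow; a minimal sketch using the commands from the log (the echoserver image and port are the ones used by the test):

  kubectl --context functional-526497 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-526497 expose deployment hello-node --type=NodePort --port=8080
  out/minikube-linux-arm64 -p functional-526497 service list               # tabular listing
  out/minikube-linux-arm64 -p functional-526497 service list -o json
  out/minikube-linux-arm64 -p functional-526497 service --namespace=default --https --url hello-node
  out/minikube-linux-arm64 -p functional-526497 service hello-node --url   # e.g. http://192.168.49.2:30839 above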

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-526497 /tmp/TestFunctionalparallelMountCmdany-port222039476/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1707870621780451445" to /tmp/TestFunctionalparallelMountCmdany-port222039476/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1707870621780451445" to /tmp/TestFunctionalparallelMountCmdany-port222039476/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1707870621780451445" to /tmp/TestFunctionalparallelMountCmdany-port222039476/001/test-1707870621780451445
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-526497 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (365.625142ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 14 00:30 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 14 00:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 14 00:30 test-1707870621780451445
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh cat /mount-9p/test-1707870621780451445
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-526497 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [aa61c6a3-fe8b-4de4-94a4-bc75c4273ae3] Pending
helpers_test.go:344: "busybox-mount" [aa61c6a3-fe8b-4de4-94a4-bc75c4273ae3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [aa61c6a3-fe8b-4de4-94a4-bc75c4273ae3] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [aa61c6a3-fe8b-4de4-94a4-bc75c4273ae3] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004019001s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-526497 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-526497 /tmp/TestFunctionalparallelMountCmdany-port222039476/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.65s)
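MountCmd/any-port exercises the host-to-guest 9p mount; a minimal sketch of the same flow (the host directory here is arbitrary):

  # Expose a host directory inside the minikube node over 9p
  out/minikube-linux-arm64 mount -p functional-526497 /tmp/some-host-dir:/mount-9p &
  # Verify from inside the node that the mount is present and readable
  out/minikube-linux-arm64 -p functional-526497 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-526497 ssh -- ls -la /mount-9p
  # Unmount and stop the mount process when finished
  out/minikube-linux-arm64 -p functional-526497 ssh "sudo umount -f /mount-9p"
  kill %1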

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-526497 /tmp/TestFunctionalparallelMountCmdspecific-port25610107/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-526497 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (619.074414ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-526497 /tmp/TestFunctionalparallelMountCmdspecific-port25610107/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-526497 ssh "sudo umount -f /mount-9p": exit status 1 (425.834621ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-526497 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-526497 /tmp/TestFunctionalparallelMountCmdspecific-port25610107/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.45s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-526497 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2481303429/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-526497 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2481303429/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-526497 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2481303429/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-526497 ssh "findmnt -T" /mount1: exit status 1 (1.017740617s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-526497 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-526497 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-526497 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2481303429/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-526497 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2481303429/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-526497 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2481303429/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.56s)
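VerifyCleanup relies on "mount --kill=true" tearing down every mount daemon; the "unable to find parent, assuming dead" lines show the harness treating a missing parent process as already cleaned up. A minimal sketch, assuming Linux semantics, of checking whether a PID is still alive using signal 0; this is an illustration, not helpers_test.go's implementation.

package main

import (
	"fmt"
	"os"
	"syscall"
)

// processAlive reports whether pid still exists. Sending signal 0 performs
// the existence/permission check without delivering any signal.
func processAlive(pid int) bool {
	p, err := os.FindProcess(pid) // on Unix this always succeeds
	if err != nil {
		return false
	}
	return p.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println(processAlive(os.Getpid())) // true: this process is running
	fmt.Println(processAlive(1 << 22))     // very likely false: assumed dead
}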

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-526497
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-526497
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-526497
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (99.24s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-592927 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0214 00:31:29.720644  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 00:31:57.408615  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-592927 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m39.240109489s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (99.24s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.48s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-592927 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-592927 addons enable ingress --alsologtostderr -v=5: (11.481857892s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.48s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-592927 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)

                                                
                                    
x
+
TestJSONOutput/start/Command (73.82s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-458689 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0214 00:36:08.156870  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:36:29.720512  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-458689 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m13.821547621s)
--- PASS: TestJSONOutput/start/Command (73.82s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-458689 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-458689 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.88s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-458689 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-458689 --output=json --user=testUser: (5.876299594s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-246589 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-246589 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.94148ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ba4243d1-7ef4-4666-820c-05a388fac173","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-246589] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a0cdd93-12cb-4956-8c68-a171f905c024","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18169"}}
	{"specversion":"1.0","id":"1e200603-24a4-4e1d-874d-b48d07330487","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"59ca546a-23e2-4e18-b720-3de5f7cc1363","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig"}}
	{"specversion":"1.0","id":"0a932577-9f79-42fd-a1ab-e7754f8e0e96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube"}}
	{"specversion":"1.0","id":"cbd93a4a-c759-40b2-9897-389c088e1f28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"01a3558d-c5a1-4375-9b24-b6ee7f345423","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1fc12f44-7d25-4324-b010-0bcbec894467","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-246589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-246589
--- PASS: TestErrorJSONOutput (0.23s)
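The --output=json stream above is one CloudEvents-style object per line, and the unsupported-driver case ends with an io.k8s.sigs.minikube.error event carrying exitcode 56. A minimal Go sketch that decodes such a stream and surfaces the error event; the struct only covers the fields visible in the stdout above and is an assumption about the full schema.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// minikubeEvent models only the fields used below.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Two sample lines in the shape seen in the stdout above.
	stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`

	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Println("skipping malformed line:", err)
			continue
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}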

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (42.9s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-615955 --network=
E0214 00:37:30.077895  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:37:33.881967  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 00:37:33.887239  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 00:37:33.897476  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 00:37:33.917719  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 00:37:33.957956  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 00:37:34.038242  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 00:37:34.198694  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 00:37:34.519227  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 00:37:35.159456  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 00:37:36.439655  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 00:37:39.001422  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 00:37:44.121630  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-615955 --network=: (40.746025349s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-615955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-615955
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-615955: (2.137928115s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.90s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (36.94s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-132708 --network=bridge
E0214 00:37:54.362665  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 00:38:14.843474  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-132708 --network=bridge: (34.913563584s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-132708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-132708
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-132708: (2.002143604s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.94s)

                                                
                                    
x
+
TestKicExistingNetwork (31.91s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-887943 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-887943 --network=existing-network: (29.863664759s)
helpers_test.go:175: Cleaning up "existing-network-887943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-887943
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-887943: (1.912422546s)
--- PASS: TestKicExistingNetwork (31.91s)

                                                
                                    
x
+
TestKicCustomSubnet (33.25s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-942808 --subnet=192.168.60.0/24
E0214 00:38:55.804275  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-942808 --subnet=192.168.60.0/24: (31.161148237s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-942808 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-942808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-942808
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-942808: (2.0725888s)
--- PASS: TestKicCustomSubnet (33.25s)
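The subnet check above hands docker a Go text/template via --format. The sketch below reproduces that template's behaviour against a trimmed-down stand-in for the inspect output (only the fields the template touches), to show how {{(index .IPAM.Config 0).Subnet}} picks the first IPAM entry's subnet.

package main

import (
	"os"
	"text/template"
)

type ipamConfig struct{ Subnet string }

type network struct {
	IPAM struct{ Config []ipamConfig }
}

func main() {
	var n network
	n.IPAM.Config = []ipamConfig{{Subnet: "192.168.60.0/24"}}

	// The same expression the test passes to docker: index into IPAM.Config
	// and print the Subnet of the first entry.
	tmpl := template.Must(template.New("subnet").Parse(`{{(index .IPAM.Config 0).Subnet}}`))
	_ = tmpl.Execute(os.Stdout, n) // prints 192.168.60.0/24
}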

                                                
                                    
x
+
TestKicStaticIP (36.4s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-121336 --static-ip=192.168.200.200
E0214 00:39:46.236086  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-121336 --static-ip=192.168.200.200: (34.131656112s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-121336 ip
helpers_test.go:175: Cleaning up "static-ip-121336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-121336
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-121336: (2.09723749s)
--- PASS: TestKicStaticIP (36.40s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (65.45s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-410708 --driver=docker  --container-runtime=crio
E0214 00:40:13.918053  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:40:17.724778  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-410708 --driver=docker  --container-runtime=crio: (30.145964045s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-413129 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-413129 --driver=docker  --container-runtime=crio: (30.116032925s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-410708
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-413129
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-413129" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-413129
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-413129: (1.968297873s)
helpers_test.go:175: Cleaning up "first-410708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-410708
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-410708: (1.999871115s)
--- PASS: TestMinikubeProfile (65.45s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.62s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-977065 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-977065 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.622780972s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.62s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-977065 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.47s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-978927 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-978927 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.472414533s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.47s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-978927 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-977065 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-977065 --alsologtostderr -v=5: (1.633373425s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-978927 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-978927
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-978927: (1.209680805s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.58s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-978927
E0214 00:41:29.720748  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-978927: (6.582028989s)
--- PASS: TestMountStart/serial/RestartStopped (7.58s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-978927 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (122.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-620812 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0214 00:42:33.882450  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 00:42:52.768761  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 00:43:01.565293  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-620812 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m1.84418324s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (122.36s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-620812 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-620812 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-620812 -- rollout status deployment/busybox: (3.737879936s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-620812 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-620812 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-620812 -- exec busybox-5b5d89c9d6-jnp56 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-620812 -- exec busybox-5b5d89c9d6-sc8bd -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-620812 -- exec busybox-5b5d89c9d6-jnp56 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-620812 -- exec busybox-5b5d89c9d6-sc8bd -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-620812 -- exec busybox-5b5d89c9d6-jnp56 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-620812 -- exec busybox-5b5d89c9d6-sc8bd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.84s)
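DeployApp2Nodes lists the busybox pods with a JSONPath expression and then runs nslookup inside each one. A minimal sketch of the same two steps driven from Go; the context name and lookup target are taken from the run above, everything else is illustrative rather than the test's code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: pod names via the same JSONPath expression used above.
	out, err := exec.Command("kubectl", "--context", "multinode-620812",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Println("kubectl get pods failed:", err)
		return
	}
	// Step 2: in-cluster DNS check from every pod.
	for _, pod := range strings.Fields(string(out)) {
		res, err := exec.Command("kubectl", "--context", "multinode-620812",
			"exec", pod, "--", "nslookup", "kubernetes.default").CombinedOutput()
		fmt.Printf("%s:\n%s(err=%v)\n", pod, res, err)
	}
}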

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (1.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-620812 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-620812 -- exec busybox-5b5d89c9d6-jnp56 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-620812 -- exec busybox-5b5d89c9d6-jnp56 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-620812 -- exec busybox-5b5d89c9d6-sc8bd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-620812 -- exec busybox-5b5d89c9d6-sc8bd -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)
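PingHostFrom2Pods extracts the host address from nslookup output with awk 'NR==5' | cut -d' ' -f3 before pinging it. The sketch below does the same extraction in Go against sample output; the sample text is illustrative, and busybox nslookup formatting can differ.

package main

import (
	"fmt"
	"strings"
)

// thirdFieldOfLine5 mimics awk 'NR==5' | cut -d' ' -f3: take line 5 of the
// output and return its third single-space-separated field.
func thirdFieldOfLine5(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	sample := "Server:    10.96.0.10\nAddress:   10.96.0.10:53\n\nName:      host.minikube.internal\nAddress 1: 192.168.58.1 host.minikube.internal\n"
	fmt.Println(thirdFieldOfLine5(sample)) // 192.168.58.1
}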

                                                
                                    
x
+
TestMultiNode/serial/AddNode (50.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-620812 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-620812 -v 3 --alsologtostderr: (49.332835375s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.01s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-620812 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 cp testdata/cp-test.txt multinode-620812:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 cp multinode-620812:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1722298399/001/cp-test_multinode-620812.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 cp multinode-620812:/home/docker/cp-test.txt multinode-620812-m02:/home/docker/cp-test_multinode-620812_multinode-620812-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812-m02 "sudo cat /home/docker/cp-test_multinode-620812_multinode-620812-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 cp multinode-620812:/home/docker/cp-test.txt multinode-620812-m03:/home/docker/cp-test_multinode-620812_multinode-620812-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812-m03 "sudo cat /home/docker/cp-test_multinode-620812_multinode-620812-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 cp testdata/cp-test.txt multinode-620812-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 cp multinode-620812-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1722298399/001/cp-test_multinode-620812-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 cp multinode-620812-m02:/home/docker/cp-test.txt multinode-620812:/home/docker/cp-test_multinode-620812-m02_multinode-620812.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812 "sudo cat /home/docker/cp-test_multinode-620812-m02_multinode-620812.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 cp multinode-620812-m02:/home/docker/cp-test.txt multinode-620812-m03:/home/docker/cp-test_multinode-620812-m02_multinode-620812-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812-m03 "sudo cat /home/docker/cp-test_multinode-620812-m02_multinode-620812-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 cp testdata/cp-test.txt multinode-620812-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 cp multinode-620812-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1722298399/001/cp-test_multinode-620812-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 cp multinode-620812-m03:/home/docker/cp-test.txt multinode-620812:/home/docker/cp-test_multinode-620812-m03_multinode-620812.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812 "sudo cat /home/docker/cp-test_multinode-620812-m03_multinode-620812.txt"
E0214 00:44:46.235746  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 cp multinode-620812-m03:/home/docker/cp-test.txt multinode-620812-m02:/home/docker/cp-test_multinode-620812-m03_multinode-620812-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 ssh -n multinode-620812-m02 "sudo cat /home/docker/cp-test_multinode-620812-m03_multinode-620812-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.42s)
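CopyFile exercises the same round trip repeatedly: minikube cp pushes a file into a node, then ssh plus sudo cat reads it back for comparison, both node-local and node-to-node. A minimal sketch of one such round trip, assuming the CLI shape used above; the file path and node names are copied from the log for illustration only.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile, node, remote = "multinode-620812", "multinode-620812-m02", "/home/docker/cp-test.txt"

	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		fmt.Println("read local:", err)
		return
	}
	// Push the file into the node.
	if err := exec.Command("out/minikube-linux-arm64", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":"+remote).Run(); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	// Read it back over ssh and compare with the local copy.
	back, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		fmt.Println("ssh cat failed:", err)
		return
	}
	fmt.Println("round trip ok:", bytes.Equal(local, back))
}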

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-620812 node stop m03: (1.22033489s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-620812 status: exit status 7 (502.283163ms)

                                                
                                                
-- stdout --
	multinode-620812
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-620812-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-620812-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-620812 status --alsologtostderr: exit status 7 (526.237887ms)

                                                
                                                
-- stdout --
	multinode-620812
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-620812-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-620812-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 00:44:49.070651  576965 out.go:291] Setting OutFile to fd 1 ...
	I0214 00:44:49.070816  576965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:44:49.070850  576965 out.go:304] Setting ErrFile to fd 2...
	I0214 00:44:49.070858  576965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:44:49.071125  576965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
	I0214 00:44:49.071342  576965 out.go:298] Setting JSON to false
	I0214 00:44:49.071381  576965 mustload.go:65] Loading cluster: multinode-620812
	I0214 00:44:49.071498  576965 notify.go:220] Checking for updates...
	I0214 00:44:49.071830  576965 config.go:182] Loaded profile config "multinode-620812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0214 00:44:49.071842  576965 status.go:255] checking status of multinode-620812 ...
	I0214 00:44:49.072648  576965 cli_runner.go:164] Run: docker container inspect multinode-620812 --format={{.State.Status}}
	I0214 00:44:49.096207  576965 status.go:330] multinode-620812 host status = "Running" (err=<nil>)
	I0214 00:44:49.096236  576965 host.go:66] Checking if "multinode-620812" exists ...
	I0214 00:44:49.096528  576965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-620812
	I0214 00:44:49.112696  576965 host.go:66] Checking if "multinode-620812" exists ...
	I0214 00:44:49.113018  576965 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 00:44:49.113067  576965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-620812
	I0214 00:44:49.143914  576965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33467 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/multinode-620812/id_rsa Username:docker}
	I0214 00:44:49.243194  576965 ssh_runner.go:195] Run: systemctl --version
	I0214 00:44:49.247558  576965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 00:44:49.258959  576965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 00:44:49.320850  576965 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:66 SystemTime:2024-02-14 00:44:49.311222303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 00:44:49.321524  576965 kubeconfig.go:92] found "multinode-620812" server: "https://192.168.58.2:8443"
	I0214 00:44:49.321549  576965 api_server.go:166] Checking apiserver status ...
	I0214 00:44:49.321593  576965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 00:44:49.332546  576965 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1264/cgroup
	I0214 00:44:49.341712  576965 api_server.go:182] apiserver freezer: "6:freezer:/docker/ee59e94b57c040f333dd1eedcc40b3a0ca34a183199e9228c9badd0a8a24ed5a/crio/crio-4bd65154a74a71f8a2f3b94f38e4d8cd04bb2f2b2b75c86af6c759b6a3616ca5"
	I0214 00:44:49.341803  576965 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ee59e94b57c040f333dd1eedcc40b3a0ca34a183199e9228c9badd0a8a24ed5a/crio/crio-4bd65154a74a71f8a2f3b94f38e4d8cd04bb2f2b2b75c86af6c759b6a3616ca5/freezer.state
	I0214 00:44:49.350167  576965 api_server.go:204] freezer state: "THAWED"
	I0214 00:44:49.350198  576965 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0214 00:44:49.359988  576965 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0214 00:44:49.360019  576965 status.go:421] multinode-620812 apiserver status = Running (err=<nil>)
	I0214 00:44:49.360043  576965 status.go:257] multinode-620812 status: &{Name:multinode-620812 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 00:44:49.360067  576965 status.go:255] checking status of multinode-620812-m02 ...
	I0214 00:44:49.360382  576965 cli_runner.go:164] Run: docker container inspect multinode-620812-m02 --format={{.State.Status}}
	I0214 00:44:49.378284  576965 status.go:330] multinode-620812-m02 host status = "Running" (err=<nil>)
	I0214 00:44:49.378311  576965 host.go:66] Checking if "multinode-620812-m02" exists ...
	I0214 00:44:49.378603  576965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-620812-m02
	I0214 00:44:49.393760  576965 host.go:66] Checking if "multinode-620812-m02" exists ...
	I0214 00:44:49.394160  576965 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 00:44:49.394214  576965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-620812-m02
	I0214 00:44:49.409759  576965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/18169-498689/.minikube/machines/multinode-620812-m02/id_rsa Username:docker}
	I0214 00:44:49.502646  576965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 00:44:49.515073  576965 status.go:257] multinode-620812-m02 status: &{Name:multinode-620812-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0214 00:44:49.515122  576965 status.go:255] checking status of multinode-620812-m03 ...
	I0214 00:44:49.515531  576965 cli_runner.go:164] Run: docker container inspect multinode-620812-m03 --format={{.State.Status}}
	I0214 00:44:49.531091  576965 status.go:330] multinode-620812-m03 host status = "Stopped" (err=<nil>)
	I0214 00:44:49.531116  576965 status.go:343] host is not running, skipping remaining checks
	I0214 00:44:49.531124  576965 status.go:257] multinode-620812-m03 status: &{Name:multinode-620812-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
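
Note: the stderr trace above is how "minikube status" decides the apiserver is healthy on the primary node: it locates the kube-apiserver process, reads its cgroup freezer state, and then queries /healthz. A rough manual equivalent, reusing the endpoint and cgroup path logged in this particular run (both are machine-specific), would be something like:

	out/minikube-linux-arm64 -p multinode-620812 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	out/minikube-linux-arm64 -p multinode-620812 ssh -- sudo cat /sys/fs/cgroup/freezer/docker/ee59e94b57c040f333dd1eedcc40b3a0ca34a183199e9228c9badd0a8a24ed5a/crio/crio-4bd65154a74a71f8a2f3b94f38e4d8cd04bb2f2b2b75c86af6c759b6a3616ca5/freezer.state
	curl -k https://192.168.58.2:8443/healthz   # 'ok' expected; may need the profile's client certificate if anonymous access to /healthz is disabled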

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-620812 node start m03 --alsologtostderr: (11.891300571s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.69s)
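
Note: recovering the single stopped worker is just a "node start" on the same profile, followed by the status and kubectl checks the test runs; the profile and node names below are the ones from this run:

	out/minikube-linux-arm64 -p multinode-620812 node start m03
	out/minikube-linux-arm64 -p multinode-620812 status
	kubectl get nodes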

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (119.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-620812
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-620812
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-620812: (24.794646579s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-620812 --wait=true -v=8 --alsologtostderr
E0214 00:46:29.721062  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-620812 --wait=true -v=8 --alsologtostderr: (1m34.886591786s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-620812
--- PASS: TestMultiNode/serial/RestartKeepsNodes (119.83s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-620812 node delete m03: (4.322919578s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.02s)
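
Note: the readiness check above (multinode_test.go:460) is a kubectl go-template; unescaped from the test-harness quoting it reads as below, and it prints one Ready-condition status per node, so after deleting m03 it should print True for the two remaining nodes:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'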

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-620812 stop: (23.599866634s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-620812 status: exit status 7 (91.637684ms)

                                                
                                                
-- stdout --
	multinode-620812
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-620812-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-620812 status --alsologtostderr: exit status 7 (106.260707ms)

                                                
                                                
-- stdout --
	multinode-620812
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-620812-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 00:47:30.827794  585072 out.go:291] Setting OutFile to fd 1 ...
	I0214 00:47:30.827952  585072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:47:30.827961  585072 out.go:304] Setting ErrFile to fd 2...
	I0214 00:47:30.827967  585072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 00:47:30.828210  585072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
	I0214 00:47:30.828386  585072 out.go:298] Setting JSON to false
	I0214 00:47:30.828423  585072 mustload.go:65] Loading cluster: multinode-620812
	I0214 00:47:30.828536  585072 notify.go:220] Checking for updates...
	I0214 00:47:30.828828  585072 config.go:182] Loaded profile config "multinode-620812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0214 00:47:30.828847  585072 status.go:255] checking status of multinode-620812 ...
	I0214 00:47:30.829328  585072 cli_runner.go:164] Run: docker container inspect multinode-620812 --format={{.State.Status}}
	I0214 00:47:30.854311  585072 status.go:330] multinode-620812 host status = "Stopped" (err=<nil>)
	I0214 00:47:30.854337  585072 status.go:343] host is not running, skipping remaining checks
	I0214 00:47:30.854345  585072 status.go:257] multinode-620812 status: &{Name:multinode-620812 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 00:47:30.854369  585072 status.go:255] checking status of multinode-620812-m02 ...
	I0214 00:47:30.854665  585072 cli_runner.go:164] Run: docker container inspect multinode-620812-m02 --format={{.State.Status}}
	I0214 00:47:30.875189  585072 status.go:330] multinode-620812-m02 host status = "Stopped" (err=<nil>)
	I0214 00:47:30.875209  585072 status.go:343] host is not running, skipping remaining checks
	I0214 00:47:30.875217  585072 status.go:257] multinode-620812-m02 status: &{Name:multinode-620812-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.80s)
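
Note: the two "Non-zero exit ... exit status 7" results above are the expected outcome of the stop, not a failure: minikube status reports stopped nodes through a non-zero exit code (the scheduled-stop test later in this report likewise annotates it as "exit status 7 (may be ok)"). A script consuming the status would check the code explicitly, for example:

	out/minikube-linux-arm64 -p multinode-620812 status; echo "status exit code: $?"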

                                                
                                    
TestMultiNode/serial/RestartMultiNode (78.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-620812 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0214 00:47:33.882085  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-620812 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.446372621s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-620812 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.17s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (31.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-620812
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-620812-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-620812-m02 --driver=docker  --container-runtime=crio: exit status 14 (88.565583ms)

                                                
                                                
-- stdout --
	* [multinode-620812-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18169
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-620812-m02' is duplicated with machine name 'multinode-620812-m02' in profile 'multinode-620812'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-620812-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-620812-m03 --driver=docker  --container-runtime=crio: (29.023232333s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-620812
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-620812: exit status 80 (317.436559ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-620812
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-620812-m03 already exists in multinode-620812-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-620812-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-620812-m03: (2.004891267s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.49s)
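
Note: a profile name must be unique across the minikube home, and it also collides with the per-node machine names of an existing multi-node profile. That is why starting a profile named multinode-620812-m02 is rejected outright above, and why "node add" later refuses to create multinode-620812-m03 once a standalone profile of that name exists. Listing profiles first shows which names are in use; the profile name in the second command is only a placeholder:

	out/minikube-linux-arm64 profile list
	out/minikube-linux-arm64 start -p some-unused-name --driver=docker --container-runtime=crio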

                                                
                                    
TestPreload (189.68s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-256817 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0214 00:49:46.236138  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-256817 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m36.203555487s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-256817 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-256817 image pull gcr.io/k8s-minikube/busybox: (1.743175458s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-256817
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-256817: (5.771105461s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-256817 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0214 00:51:09.278281  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 00:51:29.720329  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-256817 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m23.302384979s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-256817 image list
helpers_test.go:175: Cleaning up "test-preload-256817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-256817
E0214 00:52:33.882482  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-256817: (2.4200387s)
--- PASS: TestPreload (189.68s)

                                                
                                    
TestScheduledStopUnix (106.79s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-578449 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-578449 --memory=2048 --driver=docker  --container-runtime=crio: (30.119195872s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-578449 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-578449 -n scheduled-stop-578449
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-578449 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-578449 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-578449 -n scheduled-stop-578449
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-578449
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-578449 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0214 00:53:56.925563  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-578449
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-578449: exit status 7 (80.8193ms)

                                                
                                                
-- stdout --
	scheduled-stop-578449
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-578449 -n scheduled-stop-578449
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-578449 -n scheduled-stop-578449: exit status 7 (76.538247ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-578449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-578449
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-578449: (5.000495436s)
--- PASS: TestScheduledStopUnix (106.79s)
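
Note: the scheduled-stop flow exercised above reduces to three commands: arm a delayed stop, optionally cancel it, and inspect the remaining delay; the 5m/15s values are simply what this test uses:

	out/minikube-linux-arm64 stop -p scheduled-stop-578449 --schedule 5m
	out/minikube-linux-arm64 stop -p scheduled-stop-578449 --cancel-scheduled
	out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-578449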

                                                
                                    
TestInsufficientStorage (10.61s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-367112 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-367112 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.135072731s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"30c52e18-6e61-4ac6-8ea3-30831cfed6c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-367112] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2bd64e08-7cdc-4318-8282-1c23c4a2445a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18169"}}
	{"specversion":"1.0","id":"cf1e1f95-dee5-4c1d-9b27-7669a2c9b926","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bcdac6e4-87cf-43a8-8271-7f553fc959b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig"}}
	{"specversion":"1.0","id":"8bd4828c-0f29-40ce-9fc9-e0cdd50a1505","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube"}}
	{"specversion":"1.0","id":"1ab31d1e-e120-47e1-a1d3-659f52a67dc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"27de08ea-ef1c-49b0-a5d5-44cbd51d6529","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5f7644bb-bc35-44fc-9c58-8808ea5faf52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3e2e8f7a-be1e-4239-b59d-0e00b3f9aa97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4c302174-e5cd-45ad-8f95-7e2a7159c8d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9f66a3f-c53c-40b4-af04-64f5f3546895","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"fe047ff0-d525-43e4-8bc3-fe8331a369e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-367112 in cluster insufficient-storage-367112","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0825d4b1-3207-4d43-871a-1e6000fd3bb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c388284-f97d-4d0e-ad4d-d43b341e77de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a844bfb-eaed-43be-94d2-cc6fda94ac25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-367112 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-367112 --output=json --layout=cluster: exit status 7 (291.766572ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-367112","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-367112","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0214 00:54:29.319425  601314 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-367112" does not appear in /home/jenkins/minikube-integration/18169-498689/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-367112 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-367112 --output=json --layout=cluster: exit status 7 (288.96723ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-367112","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-367112","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0214 00:54:29.606674  601367 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-367112" does not appear in /home/jenkins/minikube-integration/18169-498689/kubeconfig
	E0214 00:54:29.616798  601367 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/insufficient-storage-367112/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-367112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-367112
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-367112: (1.892136823s)
--- PASS: TestInsufficientStorage (10.61s)
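
Note: with --output=json, start emits one CloudEvents-style JSON object per line, so the RSRC_DOCKER_STORAGE error above can be extracted mechanically from the stream; the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE variables seen in the log are test-only knobs that fake a full /var. The jq filter below is only an illustration of reading that stream, not part of the test:

	out/minikube-linux-arm64 start -p insufficient-storage-367112 --output=json --driver=docker --container-runtime=crio \
		| jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'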

                                                
                                    
TestRunningBinaryUpgrade (81.35s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1717816229 start -p running-upgrade-905465 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1717816229 start -p running-upgrade-905465 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.086234702s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-905465 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0214 00:59:32.768881  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-905465 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.734555614s)
helpers_test.go:175: Cleaning up "running-upgrade-905465" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-905465
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-905465: (3.007059321s)
--- PASS: TestRunningBinaryUpgrade (81.35s)

                                                
                                    
TestKubernetesUpgrade (378.96s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-727193 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-727193 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.744649472s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-727193
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-727193: (3.709975966s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-727193 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-727193 status --format={{.Host}}: exit status 7 (121.323904ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-727193 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-727193 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m42.312723432s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-727193 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-727193 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-727193 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (111.530839ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-727193] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18169
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-727193
	    minikube start -p kubernetes-upgrade-727193 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7271932 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-727193 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-727193 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-727193 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.365548986s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-727193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-727193
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-727193: (2.492104187s)
--- PASS: TestKubernetesUpgrade (378.96s)
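
Note: the upgrade path above is start at the old version, stop, then start the same profile at the new version; a downgrade of the live profile is refused with exit status 106, and the suggested recovery is the delete/start pair printed in the stderr block. Condensed, using the versions from this run:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-727193 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-727193
	out/minikube-linux-arm64 start -p kubernetes-upgrade-727193 --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=crio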

                                                
                                    
TestMissingContainerUpgrade (161.25s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1296581048 start -p missing-upgrade-212863 --memory=2200 --driver=docker  --container-runtime=crio
E0214 00:54:46.235390  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1296581048 start -p missing-upgrade-212863 --memory=2200 --driver=docker  --container-runtime=crio: (1m22.552548616s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-212863
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-212863: (11.029874714s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-212863
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-212863 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0214 00:56:29.720505  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-212863 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.942149683s)
helpers_test.go:175: Cleaning up "missing-upgrade-212863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-212863
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-212863: (2.331181648s)
--- PASS: TestMissingContainerUpgrade (161.25s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-564237 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-564237 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (84.35032ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-564237] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18169
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
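
Note: --no-kubernetes and --kubernetes-version are mutually exclusive, which is exactly what this sub-test asserts. Either drop the version flag or clear a globally configured one, as the error text itself suggests:

	out/minikube-linux-arm64 config unset kubernetes-version
	out/minikube-linux-arm64 start -p NoKubernetes-564237 --no-kubernetes --driver=docker --container-runtime=crio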

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-564237 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-564237 --driver=docker  --container-runtime=crio: (38.698526271s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-564237 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-564237 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-564237 --no-kubernetes --driver=docker  --container-runtime=crio: (5.909790502s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-564237 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-564237 status -o json: exit status 2 (325.724193ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-564237","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-564237
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-564237: (2.028245151s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.26s)

                                                
                                    
TestNoKubernetes/serial/Start (10.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-564237 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-564237 --no-kubernetes --driver=docker  --container-runtime=crio: (10.475960102s)
--- PASS: TestNoKubernetes/serial/Start (10.48s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-564237 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-564237 "sudo systemctl is-active --quiet service kubelet": exit status 1 (336.451743ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-564237
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-564237: (1.260353609s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-564237 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-564237 --driver=docker  --container-runtime=crio: (7.268546044s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.27s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-564237 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-564237 "sudo systemctl is-active --quiet service kubelet": exit status 1 (398.51611ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.84s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (65.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.471885460 start -p stopped-upgrade-055750 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0214 00:57:33.882302  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.471885460 start -p stopped-upgrade-055750 --memory=2200 --vm-driver=docker  --container-runtime=crio: (33.471290734s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.471885460 -p stopped-upgrade-055750 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.471885460 -p stopped-upgrade-055750 stop: (2.522480114s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-055750 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-055750 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.880111303s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (65.88s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-055750
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                    
TestPause/serial/Start (80.88s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-644788 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0214 00:59:46.235409  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-644788 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m20.881350978s)
--- PASS: TestPause/serial/Start (80.88s)

                                                
                                    
TestNetworkPlugins/group/false (4.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-150357 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-150357 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (217.707197ms)

                                                
                                                
-- stdout --
	* [false-150357] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18169
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 01:02:14.237532  639095 out.go:291] Setting OutFile to fd 1 ...
	I0214 01:02:14.237764  639095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 01:02:14.237791  639095 out.go:304] Setting ErrFile to fd 2...
	I0214 01:02:14.237812  639095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 01:02:14.238103  639095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-498689/.minikube/bin
	I0214 01:02:14.238608  639095 out.go:298] Setting JSON to false
	I0214 01:02:14.239721  639095 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13478,"bootTime":1707859057,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0214 01:02:14.239903  639095 start.go:138] virtualization:  
	I0214 01:02:14.243262  639095 out.go:177] * [false-150357] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 01:02:14.246202  639095 out.go:177]   - MINIKUBE_LOCATION=18169
	I0214 01:02:14.248377  639095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 01:02:14.246325  639095 notify.go:220] Checking for updates...
	I0214 01:02:14.250571  639095 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18169-498689/kubeconfig
	I0214 01:02:14.252741  639095 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-498689/.minikube
	I0214 01:02:14.254914  639095 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 01:02:14.257237  639095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 01:02:14.259551  639095 config.go:182] Loaded profile config "force-systemd-flag-117007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0214 01:02:14.259659  639095 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 01:02:14.281606  639095 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 01:02:14.281737  639095 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 01:02:14.372702  639095 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-14 01:02:14.360171078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 01:02:14.372849  639095 docker.go:295] overlay module found
	I0214 01:02:14.375292  639095 out.go:177] * Using the docker driver based on user configuration
	I0214 01:02:14.377314  639095 start.go:298] selected driver: docker
	I0214 01:02:14.377330  639095 start.go:902] validating driver "docker" against <nil>
	I0214 01:02:14.377344  639095 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 01:02:14.379908  639095 out.go:177] 
	W0214 01:02:14.381521  639095 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0214 01:02:14.383561  639095 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-150357 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-150357

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-150357

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-150357

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-150357

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-150357

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-150357

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-150357

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-150357

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-150357

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-150357

>>> host: /etc/nsswitch.conf:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: /etc/hosts:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: /etc/resolv.conf:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-150357

>>> host: crictl pods:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: crictl containers:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> k8s: describe netcat deployment:
error: context "false-150357" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-150357" does not exist

>>> k8s: netcat logs:
error: context "false-150357" does not exist

>>> k8s: describe coredns deployment:
error: context "false-150357" does not exist

>>> k8s: describe coredns pods:
error: context "false-150357" does not exist

>>> k8s: coredns logs:
error: context "false-150357" does not exist

>>> k8s: describe api server pod(s):
error: context "false-150357" does not exist

>>> k8s: api server logs:
error: context "false-150357" does not exist

>>> host: /etc/cni:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: ip a s:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: ip r s:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: iptables-save:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: iptables table nat:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> k8s: describe kube-proxy daemon set:
error: context "false-150357" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-150357" does not exist

>>> k8s: kube-proxy logs:
error: context "false-150357" does not exist

>>> host: kubelet daemon status:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: kubelet daemon config:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> k8s: kubelet logs:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-150357

>>> host: docker daemon status:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: docker daemon config:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: /etc/docker/daemon.json:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: docker system info:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: cri-docker daemon status:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: cri-docker daemon config:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: cri-dockerd version:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: containerd daemon status:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: containerd daemon config:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: /etc/containerd/config.toml:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: containerd config dump:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: crio daemon status:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: crio daemon config:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: /etc/crio:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"

>>> host: crio config:
* Profile "false-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150357"
----------------------- debugLogs end: false-150357 [took: 4.53654768s] --------------------------------
helpers_test.go:175: Cleaning up "false-150357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-150357
--- PASS: TestNetworkPlugins/group/false (4.98s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (119.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-221796 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0214 01:04:46.236404  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-221796 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m59.508395715s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (119.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (26.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-221796 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d7c969f0-e2bb-4af7-8930-ab1dcfbe98d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d7c969f0-e2bb-4af7-8930-ab1dcfbe98d3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 26.003608091s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-221796 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (26.48s)
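The DeployApp steps above (and the equivalent steps for the other profiles below) create the workload from testdata/busybox.yaml and then exec into it. A roughly equivalent imperative form is sketched here for reference; it is not the contents of the test's manifest, the image is the one later reported by VerifyKubernetesImages, and the sleep command is an assumption:

	kubectl --context old-k8s-version-221796 run busybox --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc --labels=integration-test=busybox --restart=Never -- sleep 3600
	kubectl --context old-k8s-version-221796 exec busybox -- /bin/sh -c "ulimit -n"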

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-221796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-221796 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-221796 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-221796 --alsologtostderr -v=3: (12.028457246s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-221796 -n old-k8s-version-221796
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-221796 -n old-k8s-version-221796: exit status 7 (73.730489ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-221796 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (443.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-221796 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0214 01:06:29.720997  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-221796 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m23.487893382s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-221796 -n old-k8s-version-221796
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (443.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (76.4s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-096404 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0214 01:07:33.882052  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 01:07:49.279006  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-096404 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m16.398129049s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-096404 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c03885cf-f55e-4c47-b713-5c2b37efac22] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c03885cf-f55e-4c47-b713-5c2b37efac22] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003480697s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-096404 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-096404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-096404 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.98s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-096404 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-096404 --alsologtostderr -v=3: (11.978034356s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-096404 -n no-preload-096404
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-096404 -n no-preload-096404: exit status 7 (85.054228ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-096404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (627.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-096404 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0214 01:09:46.235116  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 01:10:36.926095  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 01:11:29.720732  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 01:12:33.882065  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-096404 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (10m26.907901653s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-096404 -n no-preload-096404
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (627.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-vf2gt" [a90a2d77-f12b-43ef-80ac-ecf0d324bf91] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003193721s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
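UserAppExistsAfterStop (and AddonExistsAfterStop below) polls for a Ready pod matching k8s-app=kubernetes-dashboard for up to 9m0s after the restart. Outside the harness, a roughly equivalent manual check (not the helper's actual implementation) would be:

	kubectl --context old-k8s-version-221796 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s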

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-vf2gt" [a90a2d77-f12b-43ef-80ac-ecf0d324bf91] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003495085s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-221796 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-221796 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)
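VerifyKubernetesImages audits the node's image store via minikube image list and flags anything outside the expected Kubernetes set. The same view from inside the crio node, for anyone reproducing by hand, would be something like the following (illustrative, not part of the test):

	out/minikube-linux-arm64 ssh -p old-k8s-version-221796 -- sudo crictl images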

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-221796 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-221796 --alsologtostderr -v=1: (1.190637806s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-221796 -n old-k8s-version-221796
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-221796 -n old-k8s-version-221796: exit status 2 (374.846069ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-221796 -n old-k8s-version-221796
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-221796 -n old-k8s-version-221796: exit status 2 (461.751312ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-221796 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-221796 --alsologtostderr -v=1: (1.085329643s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-221796 -n old-k8s-version-221796
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-221796 -n old-k8s-version-221796
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (85.63s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-806897 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0214 01:14:46.235617  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-806897 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m25.628517768s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.63s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-806897 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7ed2efa7-0e7c-494e-91c9-38342e244fbb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7ed2efa7-0e7c-494e-91c9-38342e244fbb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003729085s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-806897 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-806897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-806897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.192816138s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-806897 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-806897 --alsologtostderr -v=3
E0214 01:15:44.117042  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:15:44.122366  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:15:44.132706  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:15:44.153057  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:15:44.193314  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:15:44.273600  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:15:44.434539  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:15:44.755569  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:15:45.396410  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:15:46.677556  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:15:49.237818  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:15:54.358910  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-806897 --alsologtostderr -v=3: (11.998819395s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-806897 -n embed-certs-806897
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-806897 -n embed-certs-806897: exit status 7 (81.140794ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-806897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (358.34s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-806897 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0214 01:16:04.599066  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:16:12.769883  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 01:16:25.079694  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:16:29.720991  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 01:17:06.040270  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:17:33.882187  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 01:18:27.960865  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-806897 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m57.797984352s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-806897 -n embed-certs-806897
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (358.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pzh26" [c644f176-8a13-4a7f-83ac-9ac1c3b68b39] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004076072s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pzh26" [c644f176-8a13-4a7f-83ac-9ac1c3b68b39] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00411003s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-096404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-096404 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.08s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-096404 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-096404 -n no-preload-096404
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-096404 -n no-preload-096404: exit status 2 (387.741603ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-096404 -n no-preload-096404
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-096404 -n no-preload-096404: exit status 2 (330.825029ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-096404 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-096404 -n no-preload-096404
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-096404 -n no-preload-096404
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-556481 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0214 01:19:46.235797  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-556481 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m17.618624497s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.62s)
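The default-k8s-diff-port profile starts the API server on port 8444 instead of minikube's default 8443 (--apiserver-port=8444 above). One way to confirm the generated kubeconfig picked up the non-default port, sketched here as an illustration rather than a step of the test, and assuming the kubeconfig cluster entry is named after the profile:

	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-556481")].cluster.server}'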

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-556481 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fcdc900f-0655-4b84-83b6-ffdd465e663f] Pending
helpers_test.go:344: "busybox" [fcdc900f-0655-4b84-83b6-ffdd465e663f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fcdc900f-0655-4b84-83b6-ffdd465e663f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005924744s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-556481 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-556481 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-556481 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.038422153s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-556481 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-556481 --alsologtostderr -v=3
E0214 01:20:44.116795  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-556481 --alsologtostderr -v=3: (11.941127493s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-556481 -n default-k8s-diff-port-556481
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-556481 -n default-k8s-diff-port-556481: exit status 7 (82.623873ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-556481 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (605.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-556481 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0214 01:21:11.801201  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:21:29.720319  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-556481 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m5.31554391s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-556481 -n default-k8s-diff-port-556481
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (605.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qlg5q" [b52b6ad5-e635-4c69-9d21-3221eba5305f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qlg5q" [b52b6ad5-e635-4c69-9d21-3221eba5305f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.003941967s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qlg5q" [b52b6ad5-e635-4c69-9d21-3221eba5305f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004278915s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-806897 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-806897 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-806897 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-806897 -n embed-certs-806897
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-806897 -n embed-certs-806897: exit status 2 (319.396651ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-806897 -n embed-certs-806897
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-806897 -n embed-certs-806897: exit status 2 (323.299581ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-806897 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-806897 -n embed-certs-806897
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-806897 -n embed-certs-806897
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-050304 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0214 01:22:33.882139  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-050304 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (46.188231637s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-050304 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-050304 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.039729366s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-050304 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-050304 --alsologtostderr -v=3: (1.242350842s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-050304 -n newest-cni-050304
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-050304 -n newest-cni-050304: exit status 7 (81.971791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-050304 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (30s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-050304 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0214 01:23:03.832186  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:23:03.837453  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:23:03.847727  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:23:03.868097  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:23:03.908472  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:23:03.988797  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:23:04.149277  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:23:04.469891  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:23:05.110616  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:23:06.391455  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:23:08.952594  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:23:14.072846  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:23:24.313312  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-050304 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (29.641218562s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-050304 -n newest-cni-050304
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-050304 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-050304 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-050304 -n newest-cni-050304
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-050304 -n newest-cni-050304: exit status 2 (310.3923ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-050304 -n newest-cni-050304
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-050304 -n newest-cni-050304: exit status 2 (318.2879ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-050304 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-050304 -n newest-cni-050304
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-050304 -n newest-cni-050304
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.92s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (75.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-150357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0214 01:23:44.794462  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:24:25.754692  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:24:29.279233  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 01:24:46.235799  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-150357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m15.193472178s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.19s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-150357 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-150357 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pxpcb" [55de7139-b01e-4284-a892-8cfdbed0f141] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pxpcb" [55de7139-b01e-4284-a892-8cfdbed0f141] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003799041s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-150357 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-150357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-150357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (77.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-150357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0214 01:25:44.117443  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
E0214 01:25:47.675194  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:26:29.720971  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-150357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m17.186205986s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (77.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ljnjg" [c5eed58b-f7bb-4546-8f45-8d75ed935ffc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006186551s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-150357 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.49s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-150357 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kzr8x" [d75c1c97-2345-4c69-a9d6-7716e44b6ba6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kzr8x" [d75c1c97-2345-4c69-a9d6-7716e44b6ba6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004055443s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-150357 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-150357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-150357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (70.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-150357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0214 01:27:33.882225  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/ingress-addon-legacy-592927/client.crt: no such file or directory
E0214 01:28:03.831736  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
E0214 01:28:31.516207  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-150357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m10.166604186s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.17s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-djzgt" [c7133db9-474c-4ca3-b2ae-ca794ac8f912] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004823253s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-150357 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-150357 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wj8bv" [a20e7950-1e0f-4931-a2c4-ae7f9641e77b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wj8bv" [a20e7950-1e0f-4931-a2c4-ae7f9641e77b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004454614s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-150357 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-150357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-150357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-150357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0214 01:29:46.235793  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/functional-526497/client.crt: no such file or directory
E0214 01:29:52.069593  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/auto-150357/client.crt: no such file or directory
E0214 01:29:52.074910  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/auto-150357/client.crt: no such file or directory
E0214 01:29:52.085155  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/auto-150357/client.crt: no such file or directory
E0214 01:29:52.105389  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/auto-150357/client.crt: no such file or directory
E0214 01:29:52.145656  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/auto-150357/client.crt: no such file or directory
E0214 01:29:52.226023  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/auto-150357/client.crt: no such file or directory
E0214 01:29:52.386378  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/auto-150357/client.crt: no such file or directory
E0214 01:29:52.706952  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/auto-150357/client.crt: no such file or directory
E0214 01:29:53.348151  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/auto-150357/client.crt: no such file or directory
E0214 01:29:54.628704  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/auto-150357/client.crt: no such file or directory
E0214 01:29:57.189562  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/auto-150357/client.crt: no such file or directory
E0214 01:30:02.310553  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/auto-150357/client.crt: no such file or directory
E0214 01:30:12.551339  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/auto-150357/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-150357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m4.538352848s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.54s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-150357 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-150357 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mbpv2" [b34b0307-6fe5-45e4-9d61-29e60230163e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mbpv2" [b34b0307-6fe5-45e4-9d61-29e60230163e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003622215s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-150357 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-150357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-150357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (89.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-150357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-150357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m29.465192105s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7qgt4" [bec07509-d931-4803-a4ad-e593a7264588] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00533001s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7qgt4" [bec07509-d931-4803-a4ad-e593a7264588] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005188078s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-556481 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-556481 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-556481 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-556481 -n default-k8s-diff-port-556481
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-556481 -n default-k8s-diff-port-556481: exit status 2 (444.628671ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-556481 -n default-k8s-diff-port-556481
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-556481 -n default-k8s-diff-port-556481: exit status 2 (416.484571ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-556481 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-556481 -n default-k8s-diff-port-556481
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-556481 -n default-k8s-diff-port-556481
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.98s)
E0214 01:33:31.753973  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/calico-150357/client.crt: no such file or directory
E0214 01:33:31.759240  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/calico-150357/client.crt: no such file or directory
E0214 01:33:31.769554  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/calico-150357/client.crt: no such file or directory
E0214 01:33:31.789806  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/calico-150357/client.crt: no such file or directory
E0214 01:33:31.830181  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/calico-150357/client.crt: no such file or directory
E0214 01:33:31.910485  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/calico-150357/client.crt: no such file or directory
E0214 01:33:32.070814  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/calico-150357/client.crt: no such file or directory
E0214 01:33:32.391975  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/calico-150357/client.crt: no such file or directory
E0214 01:33:33.032700  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/calico-150357/client.crt: no such file or directory
E0214 01:33:34.312994  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/calico-150357/client.crt: no such file or directory
E0214 01:33:36.874016  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/calico-150357/client.crt: no such file or directory
E0214 01:33:41.994215  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/calico-150357/client.crt: no such file or directory
E0214 01:33:52.234635  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/calico-150357/client.crt: no such file or directory
E0214 01:34:12.715697  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/calico-150357/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/flannel/Start (68.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-150357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0214 01:31:29.721018  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/addons-956081/client.crt: no such file or directory
E0214 01:31:40.818832  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
E0214 01:31:40.824064  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
E0214 01:31:40.834220  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
E0214 01:31:40.854451  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
E0214 01:31:40.894674  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
E0214 01:31:40.974867  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
E0214 01:31:41.135230  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
E0214 01:31:41.455742  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
E0214 01:31:42.096496  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
E0214 01:31:43.377499  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
E0214 01:31:45.937664  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
E0214 01:31:51.058557  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
E0214 01:32:01.298905  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
E0214 01:32:07.161878  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/old-k8s-version-221796/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-150357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m8.369488589s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-150357 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-150357 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zdgnm" [8702f855-af3d-4699-88e1-f822b06d320c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0214 01:32:21.780104  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-zdgnm" [8702f855-af3d-4699-88e1-f822b06d320c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004036749s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qv6w6" [14f41949-6599-4490-adf2-fd1d063e1101] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003985291s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-150357 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-150357 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-150357 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wm564" [3ce346f9-1036-4d7d-94a8-98e35a9cf0d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wm564" [3ce346f9-1036-4d7d-94a8-98e35a9cf0d5] Running
E0214 01:32:35.913539  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/auto-150357/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004126907s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-150357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-150357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-150357 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-150357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-150357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.27s)
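The flannel DNS, Localhost and HairPin steps above are all kubectl exec invocations against the netcat deployment: nslookup kubernetes.default exercises in-cluster DNS, "nc ... localhost 8080" checks that the pod can reach its own listening port, and "nc ... netcat 8080" checks hairpin traffic back through the pod's own service. The Go sketch below reproduces that pattern with plain os/exec; the context name is copied from this run and the snippet is not the suite's actual helper code.

// Reproduces the three probes with plain os/exec; not the suite's helper code.
package main

import (
	"fmt"
	"os/exec"
)

func probe(kubeContext string, args ...string) error {
	base := []string{"--context", kubeContext, "exec", "deployment/netcat", "--"}
	cmd := exec.Command("kubectl", append(base, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("kubectl %v\n%s\n", append(base, args...), out)
	return err
}

func main() {
	ctx := "flannel-150357" // profile/context name from this run

	// DNS: resolve the kubernetes.default service from inside the netcat pod.
	_ = probe(ctx, "nslookup", "kubernetes.default")

	// Localhost: the pod can reach its own listening port directly.
	_ = probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")

	// HairPin: the pod can reach itself back through its own 'netcat' service.
	_ = probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
}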

                                                
                                    
TestNetworkPlugins/group/bridge/Start (86.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-150357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0214 01:33:02.741144  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
E0214 01:33:03.831549  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/no-preload-096404/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-150357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m26.240949639s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.24s)
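The Start step above is a single minikube invocation whose flags are visible in the log. The sketch below shows how that same command could be driven from Go with an overall timeout slightly above the test's 15m wait budget; the binary path and flag values are copied verbatim from the run above, while the 20-minute ceiling is an assumption, not something the suite configures this way.

// Drives the same start invocation from Go; the 20-minute ceiling is an assumption.
package main

import (
	"context"
	"os"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-arm64", "start",
		"-p", "bridge-150357",
		"--memory=3072", "--alsologtostderr",
		"--wait=true", "--wait-timeout=15m",
		"--cni=bridge", "--driver=docker", "--container-runtime=crio")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}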

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-150357 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-150357 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-j9m64" [fafc230e-201c-4e56-9287-8d1138834901] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0214 01:34:24.662186  504061 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-498689/.minikube/profiles/kindnet-150357/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-j9m64" [fafc230e-201c-4e56-9287-8d1138834901] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003587761s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-150357 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-150357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-150357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (32/314)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.59s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-289549 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-289549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-289549
--- SKIP: TestDownloadOnlyKic (0.59s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-118445" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-118445
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-150357 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-150357

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-150357

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-150357

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-150357

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-150357

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-150357

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-150357

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-150357

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-150357

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-150357

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-150357

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-150357" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-150357" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-150357

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150357"

                                                
                                                
----------------------- debugLogs end: kubenet-150357 [took: 4.427595423s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-150357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-150357
--- SKIP: TestNetworkPlugins/group/kubenet (4.63s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-150357 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-150357" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-150357" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-150357" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-150357

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

>>> host: docker system info:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

>>> host: cri-docker daemon status:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

>>> host: cri-docker daemon config:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

>>> host: cri-dockerd version:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

>>> host: containerd daemon status:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

>>> host: containerd daemon config:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

>>> host: containerd config dump:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

>>> host: crio daemon status:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

>>> host: crio daemon config:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

>>> host: /etc/crio:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

>>> host: crio config:
* Profile "cilium-150357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150357"

----------------------- debugLogs end: cilium-150357 [took: 5.251491992s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-150357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-150357
--- SKIP: TestNetworkPlugins/group/cilium (5.49s)
